Publications


Effects of composition on the mechanical response of alumina-filled epoxy

Montgomery, Stephen M.

The effect of composition on the elastic responses of alumina particle-filled epoxy composites is examined using isotropic elastic response models relating the average stresses and strains in a discretely reinforced composite material consisting of perfectly bonded and uniformly distributed particles in a solid isotropic elastic matrix. Responses for small elastic deformations and large hydrostatic and plane-strain compressions are considered. The response model for small elastic deformations depends on known elastic properties of the matrix and particles, the volume fraction of the particles, and two additional material properties that reflect the composition and microstructure of the composite material. These two material properties, called strain concentration coefficients, are characterized for eleven alumina-filled epoxy composites. It is found that while the strain concentration coefficients depend strongly on the volume fraction of alumina particles, no significant dependence on particle morphology and size is observed for the compositions examined. Additionally, an analysis of the strain concentration coefficients reveals a remarkably simple dependency on the alumina volume fraction. Responses for large hydrostatic and plane-strain compressions are obtained by generalizing the equations developed for small deformation, and letting the alumina volume fraction in the composite increase with compression. The large compression plane-strain response model is shown to predict equilibrium Hugoniot states in alumina-filled epoxy compositions remarkably well.
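
The report's strain-concentration model is not reproduced in this abstract, so as a generic point of comparison only, the sketch below computes the classical Voigt and Reuss bounds on the effective Young's modulus of a two-phase particle-filled composite; the moduli and volume fraction are illustrative handbook-style values, not data from the report.

```python
# Hedged sketch: NOT the report's strain-concentration model. Voigt assumes
# uniform strain in both phases (upper bound); Reuss assumes uniform stress
# (lower bound). Any admissible effective modulus lies between the two.

def voigt_reuss_bounds(e_matrix, e_particle, vf_particle):
    """Return (E_voigt, E_reuss) bounds in GPa for a two-phase composite."""
    vf_m = 1.0 - vf_particle
    e_voigt = vf_m * e_matrix + vf_particle * e_particle          # upper bound
    e_reuss = 1.0 / (vf_m / e_matrix + vf_particle / e_particle)  # lower bound
    return e_voigt, e_reuss

# Illustrative values: epoxy ~3 GPa, alumina ~370 GPa, 30 vol% filler.
upper, lower = voigt_reuss_bounds(3.0, 370.0, 0.30)
```

Any model relating average stresses and strains through volume-fraction-dependent concentration coefficients must predict an effective stiffness between these two bounds.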


Improving electronic structure methods to predict nano-optoelectronics and nano-catalyst functions

Leung, Kevin L.; Shelnutt, John A.

This report focuses on quantum chemistry and ab initio molecular dynamics (AIMD) calculations applied to elucidate the mechanism of the multi-step, 2-electron electrochemical reduction of the greenhouse gas carbon dioxide (CO₂) to carbon monoxide (CO) in aqueous media. When combined with H₂ gas to form synthesis ('syn') gas, CO becomes a key precursor to methane, methanol, and other useful hydrocarbon products. To elucidate the mechanism of this reaction, we apply computational electrochemistry, a fledgling area of basic science critical to energy storage. This report highlights several approaches, including the calculation of redox potentials, the explicit depiction of liquid water environments using AIMD, and free energy methods. While costly, these pioneering calculations reveal the key role of hydration and protonation in stabilizing reaction intermediates, and may inform the design of CO₂-capture materials as well as its electrochemical reduction. In the course of this work, we have also dealt with the challenges of identifying and applying electronic structure methods sufficiently accurate to treat transition metal ion complex-based catalysts. Such electronic structure methods are also pertinent to the accurate modeling of actinide materials and therefore to nuclear energy research. Our multi-pronged effort toward achieving the titular goal of this LDRD is discussed.


Tracking of Nuclear Production using Indigenous Species: Final LDRD Report

Alam, Todd M.; Alam, Mary K.

Our LDRD research project sought to develop an analytical method for detection of chemicals used in nuclear materials processing. Our approach is distinctly different from current research involving hardware-based sensors. By utilizing the response of indigenous species of plants and/or animals surrounding (or within) a nuclear processing facility, we propose tracking 'suspicious molecules' relevant to nuclear materials processing. As proof of concept, we have examined TBP (tributylphosphate), used in uranium enrichment as well as plutonium extraction from spent nuclear fuels. We compare TBP to the TPP (triphenylphosphate) analog to determine the uniqueness of the metabonomic response. We show that there is a unique metabonomic response within our animal model to TBP. The TBP signature can further be delineated from that of TPP. We have also developed unique methods of instrumental transfer for metabonomic data sets.


Microbial agent detection using near-IR electrophoretic and spectral signatures (MADNESS) for rapid identification in detect-to-warn applications

Bambha, Ray B.; Gomez, Anthony L.; VanderNoot, Victoria A.; Renzi, Ronald F.; Krafcik, Karen L.

Rapid identification of aerosolized biological agents following an alarm by particle triggering systems is needed to enable response actions that save lives and protect assets. Rapid identifiers must achieve species-level specificity, as this is required to distinguish disease-causing organisms (e.g., Bacillus anthracis) from benign neighbors (e.g., Bacillus subtilis). We have developed a rapid (1-5 minute), novel identification methodology that sorts intact organisms from each other and from particulates using capillary electrophoresis (CE), and detects them using near-infrared (NIR) absorbance and scattering. We have successfully demonstrated CE resolution of Bacillus spores and vegetative bacteria at the species level. To achieve sufficient sensitivity for detection needs (~10⁴ cfu/mL for bacteria), we have developed fiber-coupled cavity-enhanced absorbance techniques. Using this method, we have demonstrated roughly two orders of magnitude greater sensitivity than published results for absorbing dyes, and single-particle (spore) detection through primarily scattering effects. Results of the integrated CE-NIR system for spore detection are presented.


Benefits from flywheel energy storage for area regulation in California - demonstration results : a study for the DOE Energy Storage Systems program

Eyer, James M.

This report documents a high-level analysis of the benefit and cost of flywheel energy storage used to provide area regulation for the electricity supply and transmission system in California. Area regulation is an 'ancillary service' needed for a reliable and stable regional electricity grid. The analysis was based on results from a demonstration, in California, of flywheel energy storage developed by Beacon Power Corporation (the system's manufacturer). The demonstration established the flywheel storage system's ability to provide 'rapid-response' regulation: flywheel storage output can be varied much more rapidly than the output from conventional regulation sources, making flywheels more attractive than conventional regulation resources. The performance of the flywheel storage system demonstrated was generally consistent with requirements for a possible new class of regulation resources - 'rapid-response' energy-storage-based regulation - in California. In short, it was demonstrated that Beacon Power Corporation's flywheel system follows a rapidly changing control signal (the ACE, which changes every four seconds). Based on the results and on expected plant cost and performance, the Beacon Power flywheel storage system has a good chance of being a financially viable regulation resource. Results indicate a benefit/cost ratio of 1.5 to 1.8 using what may be somewhat conservative assumptions. A benefit/cost ratio of one indicates that, based on the financial assumptions used, the investment's financial returns just meet the investor's target.
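
The report's discounted cash-flow details are not given in this abstract; as a hedged illustration of how the benefit/cost figure is read, the numbers below are invented, not the study's results.

```python
# Illustrative only: a benefit/cost ratio compares present-valued benefits
# to present-valued costs. A ratio of exactly 1.0 means the investment just
# meets the investor's target return under the chosen financial assumptions.

def benefit_cost_ratio(pv_benefits, pv_costs):
    return pv_benefits / pv_costs

# Hypothetical present values (dollars), chosen to land at the upper end
# of the 1.5-1.8 range quoted in the abstract.
ratio = benefit_cost_ratio(pv_benefits=18.0e6, pv_costs=10.0e6)
viable = ratio >= 1.0  # ratio above 1.0 exceeds the target return
```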


Evaluation of the impact chip multiprocessors have on SNL application performance

Doerfler, Douglas W.

This report describes trans-organizational efforts to investigate the impact of chip multiprocessors (CMPs) on the performance of important Sandia application codes. The impact of CMPs on the performance and applicability of Sandia's system software was also investigated. The goal of the investigation was to make algorithmic and architectural recommendations for next generation platform acquisitions.


Decision support for integrated water-energy planning

Tidwell, Vincent C.; Kobos, Peter H.; Malczynski, Leonard A.; Hart, William E.; Castillo, Cesar R.

Currently, electrical power generation uses about 140 billion gallons of water per day, accounting for over 39% of all freshwater withdrawals and competing with irrigated agriculture as the leading user of water. Coupled to this water use are the required pumping, conveyance, treatment, storage, and distribution of the water, which together require on average 3% of all electric power generated. While water and energy use are tightly coupled, planning and management of these fundamental resources are rarely treated in an integrated fashion. Toward this need, a decision support framework has been developed that targets the shared needs of energy and water producers, resource managers, regulators, and decision makers at the federal, state, and local levels. The framework integrates analysis and optimization capabilities to identify trade-offs, and 'best' alternatives among a broad list of energy/water options and objectives. The decision support framework is formulated in a modular architecture, facilitating tailored analyses over different geographical regions and scales (e.g., national, state, county, watershed, NERC region). An interactive interface allows direct control of the model and access to real-time results displayed as charts, graphs, and maps. Ultimately, this open and interactive modeling framework provides a tool for evaluating competing policy and technical options relevant to the energy-water nexus.


Nambe Pueblo Water Budget and Forecasting model

Brainard, James R.

This report documents the Nambe Pueblo Water Budget and Water Forecasting model. The model has been constructed using Powersim Studio (PS), a software package designed to investigate complex systems where flows and accumulations are central to the system. Here PS has been used as a platform for modeling various aspects of Nambe Pueblo's current and future water use. The model contains three major components: the Water Forecast Component, the Irrigation Scheduling Component, and the Reservoir Model Component. In each of the components, the user can change variables to investigate the impacts of water management scenarios on future water use. The Water Forecast Component includes forecasting for industrial, commercial, and livestock use. Domestic demand is also forecasted based on user-specified current population, population growth rates, and per capita water consumption. Irrigation efficiencies are quantified in the Irrigation Scheduling Component using critical information concerning diversion rates, acreages, ditch dimensions, and seepage rates. Results from this section are used in the Water Demand Forecast, Irrigation Scheduling, and Reservoir Model components. The Reservoir Component contains two sections: (1) Storage and Inflow Accumulations by Categories and (2) Release, Diversion and Shortages. Results from both sections are derived from the calibrated Nambe Reservoir model, where historic, pre-dam or above-dam USGS stream flow data are fed into the model and releases are calculated.


Energy scavenging from environmental vibration

Galchev, Tzeno; Apblett, Christopher A.; Najafi, Khalil

The goal of this project is to develop an efficient energy scavenger for converting ambient low-frequency vibrations into electrical power. To achieve this, a novel inertial micro power generator architecture has been developed that utilizes the bi-stable motion of a mechanical mass to convert a broad range of low-frequency (<30 Hz), large-deflection (>250 µm) ambient vibrations into high-frequency electrical output energy. The generator incorporates a bi-stable mechanical structure to initiate high-frequency mechanical oscillations in an electromagnetic scavenger. This frequency up-conversion technique enhances the electromechanical coupling and increases the generated power. This architecture is called the Parametric Frequency Increased Generator (PFIG). Three generations of the device have been fabricated. The concept was first demonstrated using a larger bench-top prototype with a functional volume of 3.7 cm³. It generated a peak power of 558 µW and an average power of 39.5 µW at an input acceleration of 1 g applied at 10 Hz. The performance of this device has still not been matched by any other reported work: it yielded the best power density and efficiency of any scavenger operating from low-frequency (<10 Hz) vibrations. A second-generation device was then fabricated. It generated a peak power of 288 µW and an average power of 5.8 µW from an input acceleration of 9.8 m/s² at 10 Hz. The device operates over a frequency range of 20 Hz. The internal volume of the generator is 2.1 cm³ (3.7 cm³ including casing), half that of a standard AA battery. Lastly, a piezoelectric version of the PFIG is currently being developed. This device clearly demonstrates one of the key features of the PFIG architecture, namely that it is suitable for MEMS integration (more so than resonant generators) by incorporating a brittle bulk piezoelectric ceramic. This is the first micro-scale piezoelectric generator capable of <10 Hz operation. The fabricated device currently generates a peak power of 25.9 µW and an average power of 1.21 µW from an input acceleration of 9.8 m/s² at 10 Hz. The device operates over a frequency range of 23 Hz. The internal volume of the generator is 1.2 cm³.
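
Using only the average power and internal volume figures quoted in the abstract, the average power density of the three PFIG generations can be compared directly:

```python
# Quick arithmetic on the abstract's quoted figures: average power density
# in microwatts per cubic centimeter for each generation of the PFIG.

def power_density(avg_power_uw, volume_cm3):
    return avg_power_uw / volume_cm3

gen1 = power_density(39.5, 3.7)   # bench-top prototype: 39.5 uW in 3.7 cm^3
gen2 = power_density(5.8, 2.1)    # electromagnetic micro version
gen3 = power_density(1.21, 1.2)   # piezoelectric version (in development)
```

The bench-top prototype remains the densest source, consistent with the abstract's claim that its power density has not yet been matched.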


Nanoconfined water in magnesium-rich phyllosilicates

Greathouse, Jeffery A.; Nenoff, T.M.; Cygan, Randall T.

Inelastic neutron scattering, density functional theory, ab initio molecular dynamics, and classical molecular dynamics were used to examine the behavior of nanoconfined water in palygorskite and sepiolite. These complementary methods provide a strong basis to illustrate and correlate the significant differences observed in the spectroscopic signatures of water in two unique clay minerals. Distortions of silicate tetrahedra in the smaller-pore palygorskite result in a limited number of hydrogen bonds having relatively short bond lengths. In contrast, without the distorted silicate tetrahedra, an increased number of hydrogen bonds are observed in the larger-pore sepiolite, with correspondingly longer bond distances. Because there is more hydrogen bonding at the pore interface in sepiolite than in palygorskite, we expect librational modes to have higher overall frequencies (i.e., more restricted rotational motions); experimental neutron scattering data clearly illustrate this shift in spectroscopic signatures. Distortions of the silicate tetrahedra in these minerals effectively disrupt hydrogen bonding patterns at the silicate-water interface, and this has a greater impact on the dynamical behavior of nanoconfined water than the actual size of the pore or the presence of coordinatively unsaturated magnesium edge sites.


Equation of state and transport property measurements of warm dense matter

Knudson, Marcus D.; Desjarlais, Michael P.

Location of the liquid-vapor critical point (c.p.) is one of the key features of equation of state models used in simulating high energy density physics and pulsed power experiments. For example, material behavior in the vicinity of the vapor dome is critical in determining how and when coronal plasmas form in expanding wires. Transport properties, such as conductivity and opacity, can vary by an order of magnitude depending on whether the state of the material is inside or outside of the vapor dome. Due to the difficulty in experimentally producing states near the vapor dome, the uncertainty in the location of the c.p. is of order 100% for all but a few materials, such as cesium and mercury. These states of interest can be produced on Z through high-velocity shock and release experiments. For example, it is estimated that release adiabats from ~1000 GPa in aluminum would skirt the vapor dome, allowing estimates of the c.p. to be made. This is within the reach of Z experiments (flyer plate velocity of ~30 km/s). Recent high-fidelity EOS models and hydrocode simulations suggest that the dynamic two-phase flow behavior observed in initial scoping experiments can be reproduced, providing a link between theory and experiment. Experimental identification of the c.p. in aluminum would represent the first measurement of its kind in a dynamic experiment. Furthermore, once the c.p. has been experimentally determined, it should be possible to probe the electrical conductivity, opacity, reflectivity, etc. of the material near the vapor dome using a variety of diagnostics. We propose a combined experimental and theoretical investigation with the initial emphasis on aluminum.


Increasing fault resiliency in a message-passing environment

Ferreira, Kurt; Oldfield, Ron A.; Stearley, Jon S.; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Petaflops systems will have tens to hundreds of thousands of compute nodes, which increases the likelihood of faults. Applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 30,000 nodes will likely spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost. We created a library that performs redundant computations on additional nodes allocated to the application. An active node and its redundant partner form a node bundle, which will only fail, and cause an application restart, when both nodes in the bundle fail. The goal of this library is to learn whether this can be done entirely at the user level, what requirements this library places on a Reliability, Availability, and Serviceability (RAS) system, and what its impact on performance and run time is. We find that our redundant MPI layer library imposes a relatively modest performance penalty for applications, but that it greatly reduces the number of application interrupts. This reduction in interrupts leads to huge savings in restart and rework time. For large-scale applications the savings compensate for the performance loss and for the additional nodes required for redundant computations.
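
A minimal sketch of the node-bundle reliability argument, assuming independent node failures; the failure probability and node count below are illustrative, not figures from the report.

```python
# Sketch of why bundles reduce interrupts: an application restarts only when
# BOTH nodes of some bundle fail. With independent per-node failure
# probability p over an interval, a bundle fails with probability p**2.

def p_app_interrupt(p_node, n_bundles):
    """Probability that at least one of n bundles loses both of its nodes."""
    p_bundle = p_node ** 2
    return 1.0 - (1.0 - p_bundle) ** n_bundles

# Illustrative: 30,000 active nodes, each with a 0.1% failure probability.
with_redundancy = p_app_interrupt(0.001, 30_000)        # bundled (60k nodes total)
without_redundancy = 1.0 - (1.0 - 0.001) ** 30_000      # any single failure interrupts
```

Even though redundancy doubles the node count, squaring the per-bundle failure probability drives the interrupt probability from near-certainty down to a few percent in this toy setting.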


Simulation of ion beam induced current in radiation detectors and microelectronic devices

Vizkelethy, Gyorgy V.

Ionizing radiation is known to cause Single Event Effects (SEE) in a variety of electronic devices. The mechanism that leads to these SEEs is current induced by the radiation in these devices. While this phenomenon is detrimental in ICs, it is also the basic mechanism behind the operation of semiconductor radiation detectors. To be able to predict SEEs in ICs and detector responses, we need to be able to simulate the radiation-induced current as a function of time. There are analytical models, which work for very simple detector configurations but fail for anything more complex. At the other end, TCAD programs can simulate this process in microelectronic devices, but these TCAD codes cost hundreds of thousands of dollars and require huge computing resources. In addition, in certain cases they fail to predict the correct behavior. A simulation model based on the Gunn theorem was developed and used with the COMSOL Multiphysics framework.


Graph algorithms in the Titan toolkit

McLendon, William C.

Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically, we describe the design and implementation of an open source toolkit for graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.


Integrated safeguards & security for material protection, accounting, and control

Cipiti, Benjamin B.; Duran, Felicia A.

Traditional safeguards and security design for fuel cycle facilities is done separately and after the facility design is near completion. This can result in higher costs due to retrofits and redundant use of data. Future facilities will incorporate safeguards and security early in the design process and integrate the systems to make better use of plant data and strengthen both systems. The purpose of this project was to evaluate the integration of materials control and accounting (MC&A) measurements with physical security design for a nuclear reprocessing plant. Locations throughout the plant where data overlap occurs or where MC&A data could be a benefit were identified. This mapping is presented along with the methodology for including the additional data in existing probabilistic assessments to evaluate safeguards and security systems designs.


Calibration of an interfacial force microscope for MEMS metrology : FY08-09 activities

Mitchell, John A.; Baker, Michael S.

Progress in MEMS fabrication has enabled a wide variety of force and displacement sensing devices to be constructed. One device under intense development at Sandia is a passive shock switch, described elsewhere (Mitchell 2008). A goal of all MEMS devices, including the shock switch, is to achieve a high degree of reliability. This, in turn, requires systematic methods for validating device performance during each iteration of design. Once a design is finalized, suitable tools are needed to provide quality assurance for manufactured devices. To ensure device performance, measurements on these devices must be traceable to NIST standards. In addition, accurate metrology of MEMS components is needed to validate mechanical models that are used to design devices to accelerate development and meet emerging needs. Progress towards a NIST-traceable calibration method is described for a next-generation, 2D Interfacial Force Microscope (IFM) for applications in MEMS metrology and qualification. Discussed are the results of screening several suitable calibration methods and the known sources of uncertainty in each method.


Final report LDRD project 105816 : model reduction of large dynamic systems with localized nonlinearities

Lehoucq, Richard B.; Dohrmann, Clark R.; Segalman, Daniel J.

Advanced computing hardware, and software written to exploit massively parallel architectures, greatly facilitate the computation of extremely large problems. On the other hand, these tools, though enabling higher-fidelity models, have often resulted in much longer run-times and turn-around times in providing answers to engineering problems. The impediments include smaller elements and consequently smaller time steps, much larger systems of equations to solve, and the inclusion of nonlinearities that had been ignored in days when lower-fidelity models were the norm. The research effort reported here focuses on accelerating the analysis process for structural dynamics through combinations of model reduction and mitigation of some of the factors that lead to over-meshing.


Thermal Microphotonic Focal Plane Array (TM-FPA)

Shaw, Michael S.; Lentine, Anthony L.; Nielson, Gregory N.; Wright, Jeremy B.; Peters, D.W.; Zortman, William A.; McCormick, Frederick B.

The advent of high quality factor (Q) microphotonic resonators has led to the demonstration of high-fidelity optical sensors of many physical phenomena (e.g. mechanical, chemical, and biological sensing), often with far better sensitivity than traditional techniques. Microphotonic resonators also offer potential advantages as uncooled thermal detectors, including significantly better noise performance, smaller pixel size, and faster response times than current thermal detectors. In particular, microphotonic thermal detectors do not suffer from Johnson noise in the sensor, offer far greater responsivity, and achieve greater thermal isolation because they do not require metallic leads to the sensing element. Such advantages make the prospect of a microphotonic thermal imager highly attractive. Here, we introduce the microphotonic thermal detection technique, present the theoretical basis for the approach, discuss our progress on the development of this technology, and consider future directions for thermal microphotonic imaging. We have already demonstrated the viability of device fabrication with the successful demonstration of a 20 µm pixel and a scalable readout technique. Further, to date we have achieved internal noise performance (NEP_internal < 1 pW/√Hz) in a 20 µm pixel, thereby exceeding the noise performance of the best microbolometers while simultaneously demonstrating a thermal time constant (τ = 2 ms) that is five times faster. In all, this results in an internal detectivity of D*_internal = 2 x 10⁹ cm·√Hz/W; while this is already roughly a factor of four better than the best uncooled commercial microbolometers, future demonstrations should enable another order of magnitude in sensitivity. While much work remains to achieve the level of maturity required for a deployable technology, microphotonic thermal detection has already demonstrated considerable potential.
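
As a hedged sanity check on the quoted figures, the commonly used definition of specific detectivity, D* = √A / NEP for a detector of area A and per-root-hertz NEP, reproduces the abstract's internal detectivity from its pixel size and noise numbers:

```python
# Check that a 20 um pixel with NEP = 1 pW/sqrt(Hz) gives D* ~ 2e9
# cm*sqrt(Hz)/W, the internal detectivity quoted in the abstract.
import math

def d_star(pixel_side_cm, nep_w_per_rthz):
    """Specific detectivity D* = sqrt(area) / NEP, in cm*sqrt(Hz)/W."""
    area_cm2 = pixel_side_cm ** 2
    return math.sqrt(area_cm2) / nep_w_per_rthz

# 20 um = 20e-4 cm on a side; NEP_internal = 1 pW/sqrt(Hz) = 1e-12 W/sqrt(Hz).
d = d_star(20e-4, 1e-12)  # close to the quoted 2e9 cm*sqrt(Hz)/W
```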


Climate uncertainty and implications for U.S. state-level risk assessment through 2050

Backus, George A.

Decisions for climate policy will need to take place in advance of climate science resolving all relevant uncertainties. Further, if the concern of policy is to reduce risk, then the best estimate of climate change impacts may not be as important as the currently understood uncertainty associated with realizable conditions having high consequence. This study focuses on one of the most uncertain aspects of future climate change - precipitation - to understand the implications of uncertainty on risk and the near-term justification for interventions to mitigate the course of climate change. We show that the mean risk of damage to the economy from climate change, at the national level, is on the order of one trillion dollars over the next 40 years, with employment impacts of nearly 7 million labor-years. At a 1% exceedance probability, the impact is over twice the mean-risk value. Impacts at the level of individual U.S. states are then typically in the range of tens of billions of dollars, with employment losses exceeding hundreds of thousands of labor-years. We used results of the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) climate-model ensemble as the referent for climate uncertainty over the next 40 years, mapped the simulated weather hydrologically to the county level to determine the physical consequence to economic activity at the state level, and then performed a detailed, seventy-industry analysis of economic impact among the interacting lower-48 states. We determined industry GDP and employment impacts at the state level, as well as interstate population migration, effects on personal income, and the consequences for the U.S. trade balance.
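
A minimal sketch of the exceedance-probability notion used in the abstract, on synthetic numbers rather than the study's climate-model ensemble: the damage level exceeded with probability q is an upper empirical quantile of the simulated impact distribution.

```python
# Illustrative only: the value exceeded with probability q is the (1-q)
# empirical quantile of a sample of simulated damage outcomes.

def exceedance_value(samples, q):
    """Return the sample value exceeded with probability roughly q."""
    s = sorted(samples)
    idx = int((1.0 - q) * (len(s) - 1))  # nearest-rank style index
    return s[idx]

impacts = list(range(1, 101))           # stand-in ensemble of damage estimates
p01 = exceedance_value(impacts, 0.01)   # level exceeded by ~1% of outcomes
```

In the study's framing, this 1%-exceedance level is more than twice the mean risk, which is why policy aimed at risk reduction cares about the tail, not just the best estimate.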


Parallel digital forensics infrastructure

Duggan, David P.

This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.


Communication with U.S. federal decision makers : a primer with notes on the use of computer models as a means of communication

Webb, Erik K.; Tidwell, Vincent C.

This document outlines ways to communicate more effectively with U.S. Federal decision makers by outlining the structure, authority, and motivations of various Federal groups, how to find the trusted advisors, and how to structure communication. All three branches of the Federal government have decision makers engaged in resolving major policy issues. The Legislative Branch (Congress) negotiates the authority and the resources that can be used by the Executive Branch. The Executive Branch has some latitude in implementation and in prioritizing resources. The Judicial Branch resolves disputes. The goal of all decision makers is to choose and implement the option that best fits the needs and wants of the community. However, understanding the risk of technical, political, and/or financial infeasibility and possible unintended consequences is extremely difficult. Primarily, decision makers are supported in their deliberations by trusted advisors who engage in the analysis of options as well as the day-to-day tasks associated with multi-party negotiations. In the best case, the trusted advisors use many sources of information to inform the process, including the opinions of experts and, if possible, predictive analysis from which they can evaluate the projected consequences of their decisions. The paper covers the following: (1) Understanding Executive and Legislative decision makers - what can these decision makers do? (2) Finding the target audience - who are the internal and external trusted advisors? (3) Packaging the message - how do we parse and integrate information, and how do we use computer simulation or models in policy communication?


Micro-Kelvin cold molecules

Chandler, D.W.; Strecker, Kevin S.

We have developed a novel experimental technique for direct production of cold molecules using a combination of techniques from atomic, molecular, and optical physics and physical chemistry. The ability to produce samples of cold molecules has application in a broad spectrum of technical fields: high-resolution spectroscopy, remote sensing, quantum computing, materials simulation, and understanding fundamental chemical dynamics. Researchers around the world are currently exploring many techniques for producing samples of cold molecules, but to date these attempts have offered only limited success, achieving milli-Kelvin temperatures with low densities. This Laboratory Directed Research and Development project develops a new experimental technique for producing micro-Kelvin temperature molecules via collisions with laser-cooled samples of trapped atoms. The technique relies on near mass-degenerate collisions between the molecule of interest and a laser-cooled (micro-Kelvin) atom. A subset of collisions will transfer all (or nearly all) of the kinetic energy from the 'hot' molecule, cooling the molecule at the expense of heating the atom. Further collisions with the remaining laser-cooled atoms will thermally equilibrate the molecules to the micro-Kelvin temperature of the laser-cooled atoms.


Quantitative study of rectangular waveguide behavior in the THz

Wanke, Michael W.; Rowen, Adam M.; Nordquist, Christopher N.

This report describes our efforts to quantify the behavior of micro-fabricated THz rectangular waveguides on a configurable, robust semiconductor-based platform. These waveguides are an enabling technology for coupling THz radiation directly from or to lasers, mixers, detectors, antennas, and other devices. Traditional waveguides fabricated on semiconductor platforms, such as dielectric guides in the infrared or co-planar waveguides in the microwave region, suffer high absorption and radiative losses in the THz. The former leads to very short propagation lengths, while the latter leads to unwanted radiation modes and/or crosstalk in integrated devices. This project exploited the initial developments of THz micro-machined rectangular waveguides developed under the THz Grand Challenge Program, but instead of focusing on THz transceiver integration, it focused on exploring the propagation loss and far-field radiation patterns of the waveguides. During the 9-month duration of this project we were able to reproduce the waveguide loss per unit length in the waveguides and started to explore how the loss depends on wavelength. We also explored the far-field beam patterns emitted by H-plane horn antennas attached to the waveguides. In the process we learned that the method of measuring the beam patterns has a significant impact on what is actually measured, and this may have an effect on most of the THz beam patterns that have been reported to date. The beam pattern measurements improved significantly throughout the project, but more refinement of the measurement is required before a definitive determination of the beam pattern can be made.

More Details

Cambio : a file format translation and analysis application for the nuclear emergency response community

Lasche, George L.

Cambio is an application intended to automatically read and display any spectrum file of any format in the world that the nuclear emergency response community might encounter. Cambio also provides an analysis capability suitable for HPGe spectra when detector response and scattering environment are not well known. Why is Cambio needed? (1) Cambio solves the following problem: with over 50 types of formats from instruments used in the field and new format variations appearing frequently, it is impractical for every responder to have current versions of the manufacturer's software for every instrument used in the field; (2) Cambio converts field spectra to any one of several common formats that are used for analysis, saving valuable time in an emergency situation; (3) Cambio provides basic tools for comparing spectra, calibrating spectra, and isotope identification, with analysis suited especially for HPGe spectra; and (4) Cambio has a batch-processing capability to automatically translate a large number of archival spectral files of any format to one of several common formats, such as the IAEA SPE or the DHS N42. Currently over 540 analysts and members of the nuclear emergency response community worldwide are on the distribution list for updates to Cambio. Cambio users come from all levels of government, university, and commercial partners around the world that support efforts to counter terrorist nuclear activities. Cambio is Unclassified Unlimited Release (UUR) and distributed by internet downloads, with email notifications whenever a new build of Cambio provides new formats, bug fixes, or new or improved capabilities. Cambio is also provided as a DLL to the Karlsruhe Institute for Transuranium Elements so that Cambio's automatic file-reading capability can be included at the Nucleonica web site.

More Details

Dynamic crack initiation toughness : experiments and peridynamic modeling

Foster, John T.

This dissertation presents research on the dynamic crack initiation toughness of a 4340 steel. Researchers have conducted experimental testing of dynamic crack initiation toughness, K{sub Ic}, for many years using many experimental techniques, with vastly different trends in the results when K{sub Ic} is reported as a function of loading rate. The dissertation describes a novel experimental technique for measuring K{sub Ic} in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K{sub Ic} being much higher at the higher loading rate. Using the knowledge of this rate dependence as motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integro-differential equations, which do not require the existence of spatial derivatives of the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact tests for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K{sub Ic} values recorded experimentally as an input.
The failure model is then validated against one class of problems showing good agreement with experimental results.
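The replacement of spatial derivatives by an integral over bonds can be sketched with a minimal bond-based peridynamic force calculation on a 1D bar (a deliberate simplification of the state-based, viscoplastic model the dissertation actually implements; the micromodulus c and grid are illustrative):

```python
def peridynamic_force(u, dx, horizon, c):
    """Bond-based peridynamic internal force density on a 1D bar.
    Each node interacts with all neighbors within the horizon; each bond
    carries a pairwise force proportional to its stretch, so no spatial
    derivatives of the displacement field u are required."""
    n = len(u)
    m = int(round(horizon / dx))  # number of neighbors per side
    f = [0.0] * n
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if j == i:
                continue
            xi = (j - i) * dx                     # reference bond vector
            eta = u[j] - u[i]                     # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            direction = 1.0 if xi + eta > 0 else -1.0
            f[i] += c * stretch * direction * dx  # bond force * neighbor volume
    return f

# Uniform stretch u(x) = 0.01 x: every bond has the same stretch, so the
# pairwise forces on interior nodes cancel and the net force is zero.
u = [0.01 * k for k in range(21)]
f = peridynamic_force(u, dx=1.0, horizon=3.0, c=1.0)
print(abs(f[10]) < 1e-12)  # True
```

Because the force on a node is a sum over finite bonds, a broken bond (a crack) is represented simply by dropping that bond from the sum, which is what makes unguided crack growth straightforward in this framework.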

More Details

Presto 4.14 users guide

Spencer, Benjamin S.

Presto is a three-dimensional transient dynamics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. Contact capabilities are parallel and scalable. The Presto 4.14 User's Guide provides information about the functionality in Presto and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Presto are similar to those of the code Adagio [3]. Adagio is a three-dimensional quasi-static code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. Adagio, like Presto, is built on the SIERRA Framework [1]. Contact capabilities for Adagio are also parallel and scalable. A significant feature of Adagio is that it offers a multilevel, nonlinear iterative solver. Because of the similarities in input and usage between Presto and Adagio, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Adagio may be found in the Presto user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code.
For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact differs between the two guides. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4, and JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D, which is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13], a third-party library for contact. One of the key concepts for the command structure in the input file is scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.

More Details

Adagio 4.14 users guide

Spencer, Benjamin S.

This document is a user's guide for the code Adagio. Adagio is a three-dimensional, implicit solid mechanics code with a versatile element library, nonlinear material models, and capabilities for modeling large deformation and contact. Adagio is a parallel code, and its nonlinear solver and contact capabilities enable scalable solutions of large problems. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. The Adagio 4.14 User's Guide provides information about the functionality in Adagio and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Adagio are similar to those of the code Presto [3]. Presto, like Adagio, is a solid mechanics code built on the SIERRA Framework. The primary difference between the two codes is that Presto uses explicit time integration for transient dynamics analysis, whereas Adagio is an implicit code. Because of the similarities in input and usage between Adagio and Presto, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Presto may be found in the Adagio user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code.
For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact differs between the two guides. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4, and JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D, which is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13], a third-party library for contact. One of the key concepts for the command structure in the input file is scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.

More Details

Systems engineering management plans

Rodriguez, Tamara S.

The Systems Engineering Management Plan (SEMP) is a comprehensive and effective tool used to assist in the management of systems engineering efforts. It is intended to guide the work of all those involved in the project. The SEMP is comprised of three main sections: technical project planning and control, systems engineering process, and engineering specialty integration. The contents of each section must be tailored to the specific effort. A model outline and example SEMP are provided. The target audience is those who are familiar with the systems engineering approach and who have an interest in employing the SEMP as a tool for systems management. The goal of this document is to provide the reader with an appreciation for the use and importance of the SEMP, as well as provide a framework that can be used to create the management plan.

More Details

Plume rise calculations using a control volume approach and the damped spring oscillator analogy

2008 Proceedings of the ASME Summer Heat Transfer Conference, HT 2008

Brown, Alexander L.; Bixler, Nathan E.

The PUFF code was originally written and designed to calculate the rise of a large detonation or deflagration non-continuous plume (puff) in the atmosphere. It is based on a buoyant spherical control volume approximation. The theory for the model is updated and presented. The model has been observed to result in what are believed to be unrealistic plume elevation oscillations as the plume approaches the terminal elevation. Recognizing a similarity between the equations for a classical damped spring oscillator and the present model, the plume rise model can be analyzed by evaluating equivalent spring constants and damping functions. Such an analysis suggests a buoyant plume in the atmosphere is significantly under-damped, explaining the occurrence of the oscillations in the model. Based on lessons learned from the analogy evaluations and guided by comparisons with early plume rise data, a set of assumptions is proposed to address the excessive oscillations found in the predicted plume near the terminal elevation, and to improve the robustness of the predictions. This is done while retaining the basic context of the present model formulation. The propriety of the present formulation is evaluated. The revised model fits the vast majority of the existing data to +/- 25%, which is considered reasonable given the present model form. Further validation efforts would be advisable, but are impeded by a lack of quality existing datasets. Copyright © 2008 by ASME.
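The damped spring oscillator analogy can be illustrated numerically: an under-damped system (small damping ratio) oscillates repeatedly about equilibrium, just as the modeled plume oscillates about its terminal elevation, while near-critical damping suppresses the oscillation. A sketch under that analogy (the integrator and parameters are illustrative, not the PUFF formulation):

```python
def oscillator_trajectory(zeta, omega=1.0, x0=1.0, dt=0.01, t_end=30.0):
    """Integrate x'' + 2*zeta*omega*x' + omega^2*x = 0 with semi-implicit
    Euler; x plays the role of the plume's offset from terminal elevation."""
    x, v = x0, 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        v += (-2.0 * zeta * omega * v - omega ** 2 * x) * dt
        x += v * dt
        xs.append(x)
    return xs

def zero_crossings(xs):
    """Count sign changes, i.e. oscillations through equilibrium."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)

# A significantly under-damped system crosses equilibrium many times;
# near-critical damping removes the oscillation entirely.
print(zero_crossings(oscillator_trajectory(zeta=0.05)))  # many crossings
print(zero_crossings(oscillator_trajectory(zeta=1.0)))   # no crossings
```

In the spring analogy, the paper's finding that the buoyant plume is significantly under-damped corresponds to the small-zeta case, which is why the unmodified model exhibits persistent elevation oscillations.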

More Details

Performance of a parallel algebraic multilevel preconditioner for stabilized finite element semiconductor device modeling

Journal of Computational Physics

Lin, Paul T.; Shadid, John N.; Sala, Marzio; Tuminaro, Raymond S.; Hennigan, Gary L.; Hoekstra, Robert J.

In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source term dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentration. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10{sup 8} unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system. © 2009 Elsevier Inc. All rights reserved.

More Details

Optical requirements with turbulence correction for long-range biometrics

Proceedings of SPIE - The International Society for Optical Engineering

Soehnel, Grant H.; Bagwell, Brett B.; Dixon, Kevin R.; Wick, David V.

Iris recognition utilizes distinct patterns found in the human iris to perform identification. Image acquisition is a critical first step towards successful operation of iris recognition systems. However, the quality of iris images required by standard iris recognition algorithms places hard constraints on the imaging optics; as a result, systems demonstrated to date have required relatively short subject stand-off distances. In this paper, we study long-range iris recognition at distances as large as 200 meters and determine the conditions the imaging system must satisfy for identification at longer stand-off distances. © 2009 SPIE.

More Details

Simulating the effects of long-range collection on synthetic aperture radar imagery

Proceedings of SPIE - The International Society for Optical Engineering

Richards, John R.

Synthetic aperture radar (SAR) images exhibit a fundamental inverse relationship between image quality and collection range: various metrics and visual inspection clearly indicate that SAR image quality deteriorates as collection range increases. Standoff constraints typically dictate long-range imaging geometries for operational use of fielded SAR sensors. At the same time, system validation and data volume considerations typically dictate short-range imaging geometries for non-operational SAR data collections. This presents a conundrum for the developers of SAR exploitation applications: despite the fact that a sensor may be used exclusively at long ranges in operational settings, most or all of the data available for application development and testing may have been collected at short range. The lack of long-range imagery for development and testing can lead to a variety of problems, potentially including not only poor robustness to range-induced image-quality degradation, but even total failure if longer-range imagery invalidates fundamental algorithmic assumptions. We propose a method for simulating the effects of longer-range collection using shorter-range SAR images. This method incorporates the predominant contributing factors to range-induced image-quality degradation, including various signal-attenuation and aperture-decoherence effects. We present examples demonstrating our approach. © 2009 SPIE.

More Details

Model building techniques for analysis

Brooks, Sean B.; Córdova, Theresa E.; Henry, Ronald C.; Martin, Wilbur D.; McDaniel, Karen L.; Walther, Howard P.

The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM is currently a time-consuming effort; the turnaround time for analysis results needs to be decreased for analysis to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to the method by which features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of creating the ASM from the DSM.

More Details

Wind energy Computerized Maintenance Management System (CMMS) : data collection recommendations for reliability analysis

Hines, Valerie A.

This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. Written by Sandia National Laboratories, it is intended to help the reader develop a basic understanding of what data are needed from a Computerized Maintenance Management System (CMMS) and other data systems for reliability analysis. The report provides: (1) a list of the data needed to support reliability and availability analysis; and (2) specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and a wider variety of analysis and reporting needs.
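As a minimal illustration of the kind of automated analysis such data enable, the sketch below computes operational availability and mean time between failures from hypothetical CMMS records (the field names and numbers are invented for illustration, not drawn from the report):

```python
def operational_availability(uptime_hours, downtime_hours):
    """Availability = uptime / (uptime + downtime)."""
    return uptime_hours / (uptime_hours + downtime_hours)

def mtbf(operating_hours, failure_count):
    """Mean time between failures from CMMS event counts."""
    return operating_hours / failure_count

# Hypothetical per-turbine records, as might come from a CMMS export.
events = [
    {"turbine": "T01", "operating_h": 8400.0, "down_h": 360.0, "failures": 6},
    {"turbine": "T02", "operating_h": 8600.0, "down_h": 160.0, "failures": 2},
]
for e in events:
    print(e["turbine"],
          round(operational_availability(e["operating_h"], e["down_h"]), 3),
          round(mtbf(e["operating_h"], e["failures"]), 1))
# T01 0.959 1400.0
# T02 0.982 4300.0
```

Even this toy example shows why the report stresses consistent recording of operating hours, downtime, and failure events: every reliability metric above is computed directly from those fields.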

More Details

A comparison of Lagrangian/Eulerian approaches for tracking the kinematics of high deformation solid motion

Ames, Thomas L.; Robinson, Allen C.

The modeling of solids is most naturally placed within a Lagrangian framework because it requires constitutive models which depend on knowledge of the original material orientations and subsequent deformations. Detailed kinematic information is needed to ensure material frame indifference which is captured through the deformation gradient F. Such information can be tracked easily in a Lagrangian code. Unfortunately, not all problems can be easily modeled using Lagrangian concepts due to severe distortions in the underlying motion. Either a Lagrangian/Eulerian or a pure Eulerian modeling framework must be introduced. We discuss and contrast several Lagrangian/Eulerian approaches for keeping track of the details of material kinematics.
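The role of the deformation gradient F can be illustrated with a minimal numerical example: differentiate a prescribed 2D motion with respect to the reference coordinates and check that simple shear is volume-preserving (det F = 1). The motion and step size here are illustrative, not from the report:

```python
def deformation_gradient(motion, X, h=1e-6):
    """Central finite-difference deformation gradient F_ij = d x_i / d X_j
    of a 2D motion x = motion(X) at reference point X."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        Xp, Xm = list(X), list(X)
        Xp[j] += h
        Xm[j] -= h
        xp, xm = motion(Xp), motion(Xm)
        for i in range(2):
            F[i][j] = (xp[i] - xm[i]) / (2.0 * h)
    return F

def simple_shear(X, gamma=0.5):
    """x1 = X1 + gamma*X2, x2 = X2: a volume-preserving shear."""
    return [X[0] + gamma * X[1], X[1]]

F = deformation_gradient(simple_shear, [1.0, 2.0])
J = F[0][0] * F[1][1] - F[0][1] * F[1][0]  # det F = volume change ratio
print(F)  # ~[[1.0, 0.5], [0.0, 1.0]]
print(J)  # ~1.0 -> isochoric (volume-preserving) motion
```

In a Lagrangian code this gradient is available directly because material points are tracked; the approaches contrasted in the paper concern how to recover equivalent kinematic information when the mesh does not follow the material.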

More Details

Investigating methods of supporting dynamically linked executables on high performance computing platforms

Laros, James H.; Kelly, Suzanne M.; Levenhagen, Michael J.; Pedretti, Kevin T.T.

Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, Light Weight operating systems such as Catamount have historically not supported the use of shared libraries specifically because they hinder scalability. In this report we outline the methods we investigated for supporting shared libraries on High Performance Computing platforms that use Light Weight kernels. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation, we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing, and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

More Details

Final report : impacts analysis for cyber attack on electric power systems (national SCADA test bed FY09)

Stamp, Jason E.; Laviolette, Randall A.

The development continues for Finite State Abstraction (FSA) methods to enable Impacts Analysis (IA) for cyber attack against power grid control systems. Building upon previous work, we successfully demonstrated the addition of Bounded Model Checking (BMC) to the FSA method, which constrains grid conditions to reasonable behavior. The new FSA feature was successfully implemented and tested. FSA is an important part of IA for the power grid, complementing steady-state approaches. It enables the simultaneous evaluation of myriad dynamic trajectories for the system, which in turn facilitates IA for whole ranges of system conditions simultaneously. Given the potentially wide range and subtle nature of potential control system attacks, this is a promising research approach. In this report, we will explain the addition of BMC to the previous FSA work and some testing/simulation upon the implemented code using a two-bus test system. The current FSA approach and code allow the calculation of the acceptability of power grid conditions post-cyber attack (over a given time horizon and for a specific grid topology). Future work will enable analysis spanning various topologies (to account for switching events), as well as an understanding of the cyber attack stimuli that can lead to undesirable grid conditions.
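The idea of bounded model checking, exploring a finite-state abstraction only out to a fixed number of steps, can be sketched as a bounded reachability search on a toy state machine (the states and transitions below are invented for illustration and are not the project's actual grid abstraction):

```python
def bounded_check(initial, transitions, bad_states, bound):
    """Bounded model checking as bounded reachability: explore the
    finite-state abstraction up to `bound` steps and return the first
    step at which an unacceptable state is reachable, else None."""
    frontier = {initial}
    seen = set(frontier)
    for step in range(1, bound + 1):
        frontier = {s2 for s in frontier for s2 in transitions.get(s, ())}
        frontier -= seen
        if frontier & bad_states:
            return step
        seen |= frontier
        if not frontier:
            break
    return None

# Toy abstraction of coarse grid operating conditions after an attack.
transitions = {
    "nominal":  ["nominal", "stressed"],
    "stressed": ["nominal", "overload"],
    "overload": ["collapse"],
}
print(bounded_check("nominal", transitions, {"collapse"}, bound=2))  # None
print(bounded_check("nominal", transitions, {"collapse"}, bound=3))  # 3
```

Constraining the search to a bound is what keeps the analysis tractable while still evaluating the many dynamic trajectories the system could follow within the time horizon of interest.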

More Details

Improving performance via mini-applications

Doerfler, Douglas W.; Crozier, Paul C.; Edwards, Harold C.; Williams, Alan B.; Rajan, Mahesh R.; Keiter, Eric R.; Thornquist, Heidi K.

Application performance is determined by a combination of many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, we find that the use of mini-applications - small self-contained proxies for real applications - is an excellent approach for rapidly exploring the parameter space of all these choices. Furthermore, use of mini-applications enriches the interaction between application, library and computer system developers by providing explicit functioning software and concrete performance results that lead to detailed, focused discussions of design trade-offs, algorithm choices and runtime performance issues. In this paper we discuss a collection of mini-applications and demonstrate how we use them to analyze and improve application performance on new and future computer platforms.

More Details

Computational investigation of thermal gas separation for CO2 capture

Torczynski, J.R.; Gallis, Michail A.; Brooks, Carlton F.; Brady, Patrick V.; Bryan, Charles R.

This report summarizes the work completed under the Laboratory Directed Research and Development (LDRD) project 09-1351, 'Computational Investigation of Thermal Gas Separation for CO{sub 2} Capture'. Thermal gas separation for a binary mixture of carbon dioxide and nitrogen is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Molecular models for nitrogen and carbon dioxide are developed, implemented, compared to theoretical results, and compared to several experimental thermophysical properties. The molecular models include three translational modes, two fully excited rotational modes, and vibrational modes, whose degree of excitation depends on the temperature. Nitrogen has one vibrational mode, and carbon dioxide has four vibrational modes (two of which are degenerate). These models are used to perform a parameter study for mixtures of carbon dioxide and nitrogen confined between parallel walls over realistic ranges of gas temperatures and nominal concentrations of carbon dioxide. The degree of thermal separation predicted by DSMC is slightly higher than experimental values and is sensitive to the details of the molecular models.
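The temperature-dependent degree of vibrational excitation mentioned above follows the harmonic oscillator model, in which a mode with characteristic temperature theta_v contributes an effective number of degrees of freedom 2*(theta_v/T)/(exp(theta_v/T) - 1). A sketch using approximate textbook characteristic temperatures (illustrative values, not taken from the report):

```python
import math

def vib_dof(theta_v, T):
    """Effective vibrational degrees of freedom contributed by one
    harmonic mode with characteristic temperature theta_v (K) at gas
    temperature T (K); ranges from ~0 (frozen) to 2 (fully excited)."""
    x = theta_v / T
    return 2.0 * x / (math.exp(x) - 1.0)

# Approximate characteristic vibrational temperatures (K).
N2_MODES = [3371.0]
CO2_MODES = [960.0, 960.0, 1930.0, 3380.0]  # two degenerate bending modes

for T in (300.0, 1000.0):
    zeta_n2 = sum(vib_dof(th, T) for th in N2_MODES)
    zeta_co2 = sum(vib_dof(th, T) for th in CO2_MODES)
    print(T, round(zeta_n2, 3), round(zeta_co2, 3))
```

At room temperature the nitrogen mode is essentially frozen while the low-lying CO{sub 2} bending modes are partially excited, which is one reason the two species store internal energy differently in a thermal gradient.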

More Details

MEMS reliability: Where are we now?

Microelectronics Reliability

Tanner, Danelle M.

This paper reviews significant successes in MEMS products from a reliability perspective. MEMS reliability is challenging and can be device and process dependent, but exercising proper reliability techniques very early in product development has yielded success for many manufacturers. The reliability concerns of various devices are discussed, including ink jet printheads, inertial sensors, pressure sensors, micro-mirror arrays, and the emerging applications of RF switches and resonators. Metal-contacting RF switches are susceptible to hydrocarbon contamination, which can increase the contact resistance with increasing cycle count. Packaging techniques are described in the context of the whole reliability program. © 2009 Elsevier Ltd.

More Details

Science at the interface : grain boundaries in nanocrystalline metals

Foiles, Stephen M.; Medlin, Douglas L.; Holm, Elizabeth A.; Brewer, Luke N.; Hattar, Khalid M.; Knapp, J.A.; Rodriguez, M.A.

Interfaces are a critical determinant of the full range of materials properties, especially at the nanoscale. Computational and experimental methods were combined to develop a comprehensive understanding of nanograin evolution based on a fundamental understanding of internal interfaces in nanocrystalline nickel. It has recently been shown that nanocrystals with a bi-modal grain-size distribution possess a unique combination of high strength, ductility, and wear resistance. We performed a combined experimental and theoretical investigation of the structure and motion of internal interfaces in nanograined metal and the resulting grain evolution. The properties of grain boundaries are computed for an unprecedented range of boundaries. The presence of roughening transitions in grain boundaries is explored and related to dramatic changes in boundary mobility. Experimental observations show that abnormal grain growth in nanograined materials is unlike that in conventional-scale material in both the level of defects and the formation of unfavored phases. Molecular dynamics simulations address the origins of some of these phenomena.

More Details

Testing military grade magnetics (transformers, inductors and coils)

Vrabel, Paul E.

Engineers and designers are constantly searching for test methods to qualify or 'prove-in' new designs. In the high-reliability world of military parts, design tests, qualification tests, in-process tests, and product characteristic tests become even more important. The use of in-process and functional tests has been adopted as a way of demonstrating that parts will operate correctly and survive their 'use' environments. This paper discusses various types of tests used to qualify magnetic components: the current-carrying capability of coils, a next-assembly 'as used' test, a corona test, and an inductance-at-temperature test. Each of these tests addresses a different potential failure of a component. The entire process from design to implementation is described.

More Details

Nanostructures from hydrogen implantation of metals

Ong, Markus D.; Yang, Nancy Y.; DePuit, Ryan D.; McWatters, Bruce R.; Causey, Rion A.

This study investigates a pathway to nanoporous structures created by hydrogen implantation in aluminum. Previous experiments for fusion applications have indicated that hydrogen and helium ion implantations are capable of producing bicontinuous nanoporous structures in a variety of metals. This study focuses specifically on hydrogen and helium implantations of aluminum, including complementary experimental results and computational modeling of this system. Experimental results show the evolution of the surface morphology as the hydrogen ion fluence increases from 10{sup 17} cm{sup -2} to 10{sup 18} cm{sup -2}. Implantations of helium at a fluence of 10{sup 18} cm{sup -2} produce porosity on the order of 10 nm. Computational modeling demonstrates the formation of alanes, their desorption, and the resulting etching of aluminum surfaces that likely drives the nanostructures that form in the presence of hydrogen.

More Details

Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results

Eldred, Michael S.; Swiler, Laura P.

This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
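The separation of aleatory and epistemic uncertainty described above can be sketched as a nested loop: an inner Monte Carlo computes a statistic over the aleatory variable, and an outer sweep over the epistemic interval bounds that statistic (a simple grid stand-in for the stochastic expansions and interval optimization in the report; the toy response function is invented):

```python
import random

def response(aleatory_x, epistemic_e):
    """Toy model response; stands in for an expensive simulation."""
    return (aleatory_x + epistemic_e) ** 2

def aleatory_mean(epistemic_e, n_samples=20000, seed=0):
    """Inner loop: Monte Carlo mean over the aleatory variable
    (standard normal here) at a fixed epistemic value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += response(rng.gauss(0.0, 1.0), epistemic_e)
    return total / n_samples

# Outer loop: sweep the epistemic interval [-1, 1] and report bounds on
# the aleatory mean; exact values are 1 + e^2, so the bounds are ~[1, 2].
grid = [-1.0 + 0.25 * k for k in range(9)]
means = [aleatory_mean(e) for e in grid]
print(min(means), max(means))
```

The resulting [min, max] pair is an interval on the statistic rather than a single probability, which is the defining output of a mixed aleatory-epistemic analysis; the report's contribution is making the outer bounding step far cheaper than this brute-force nesting.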

More Details

RF/microwave properties of nanotubes and nanowires : LDRD Project 105876 final report

Lee, Mark L.; Highstrete, Clark H.; Hsu, Julia W.; Scrymgeour, David S.

LDRD Project 105876 was a research project whose primary goal was to discover the currently unknown science underlying the basic linear and nonlinear electrodynamic response of nanotubes and nanowires, in a manner that will support future efforts aimed at converting forefront nanoscience into innovative new high-frequency nanodevices. The project involved experimental and theoretical efforts to discover and understand the high-frequency (MHz through tens of GHz) electrodynamic response properties of nanomaterials, emphasizing silicon nanowires, zinc oxide nanowires, and carbon nanotubes. While there is much research on the DC electrical properties of nanowires, electrodynamic characteristics still represent a major new frontier in nanotechnology. We generated world-leading insight into how the low dimensionality of these nanomaterials yields sometimes desirable and sometimes problematic high-frequency properties that lie outside standard models of electron dynamics. In the cases of silicon nanowires and carbon nanotubes, evidence of strong disorder or glass-like charge dynamics was measured, indicating that these materials still suffer from serious inhomogeneities that limit their high-frequency performance. Zinc oxide nanowires were found to obey conventional Drude dynamics. In all cases, a significant practical problem had to be overcome: the large impedance mismatch between the high intrinsic impedance of all nanowires and nanotubes and high-frequency test equipment.

More Details

THz transceiver characterization : LDRD project 139363 final report

Lee, Mark L.; Wanke, Michael W.; Nordquist, Christopher N.; Cich, Michael C.; Wendt, J.R.; Fuller, Charles T.; Reno, J.L.

LDRD Project 139363 supported experiments to quantify the performance characteristics of monolithically integrated Schottky diode + quantum cascade laser (QCL) heterodyne mixers at terahertz (THz) frequencies. These integrated mixers are the first all-semiconductor THz devices to successfully incorporate a rectifying diode directly into the optical waveguide of a QCL, obviating the conventional optical coupling between a THz local oscillator and rectifier in a heterodyne mixer system. This integrated mixer was shown to function as a true heterodyne receiver of an externally received THz signal, a breakthrough which may lead to more widespread acceptance of this new THz technology paradigm. In addition, questions about QCL mode shifting in response to temperature, bias, and external feedback, and to what extent internal frequency locking can improve stability have been answered under this project.

More Details

Overview of geologic storage of natural gas with an emphasis on assessing the feasibility of storing hydrogen

Lord, Anna S.

In many regions across the nation, geologic formations are currently being used to store natural gas underground. Storage options are dictated by the regional geology and the operational need. The U.S. Department of Energy (DOE) has an interest in understanding these various geologic storage options and their advantages and disadvantages, in the hope of developing an underground facility for the storage of hydrogen as a low-cost storage option within the hydrogen delivery infrastructure. Currently, depleted gas/oil reservoirs, aquifers, and salt caverns are the three main types of underground natural gas storage in use today. Other storage options available currently and in the near future, such as abandoned coal mines, lined hard-rock caverns, and refrigerated mined caverns, will become more popular as the demand for natural gas storage grows, especially in regions where depleted reservoirs, aquifers, and salt deposits are not available. Storing hydrogen in the same types of facilities currently used for natural gas may add new operational challenges for the existing cavern storage industry, such as the loss of hydrogen through chemical reactions and the occurrence of hydrogen embrittlement. Currently there are only three locations worldwide, two of which are in the United States, that store hydrogen. All three sites store hydrogen within salt caverns.

More Details

Application of advanced laser diagnostics to hypersonic wind tunnels and combustion systems

Hsu, Andrea H.; Frank, Jonathan H.

This LDRD was a Sandia Fellowship that supported Andrea Hsu's PhD research at Texas A&M University and her work as a visitor at Sandia's Combustion Research Facility. The research project at Texas A&M University is concerned with the experimental characterization of hypersonic (Mach > 5) flowfields using advanced laser and optical diagnostics. This effort is part of a Multidisciplinary University Research Initiative (MURI) and is a collaboration between the Chemistry and Aerospace Engineering departments. Hypersonic flight conditions often lead to a non-thermochemical equilibrium (NTE) state of air, where the timescale for reaching a single (equilibrium) Boltzmann temperature is much longer than the timescale of the flow. Certain molecular modes, such as vibrational modes, may be much more excited than the translational or rotational modes of the molecule, leading to thermal nonequilibrium. A nontrivial amount of energy is therefore contained within the vibrational mode, and this energy cascades into the flow as thermal energy, affecting flow properties through vibrational-vibrational (V-V) and vibrational-translational (V-T) energy exchanges between the flow species. The research is a fundamental experimental study of these NTE systems and involves the application of advanced laser and optical diagnostics to hypersonic flowfields. The research is broken down into two main categories: the application and adaptation of existing laser and optical techniques toward characterization of NTE, and the development of new molecular tagging velocimetry techniques, which have been demonstrated in an underexpanded jet flowfield but may be extended to a variety of flowfields. In addition, Andrea's work at Sandia National Labs involved the application of advanced laser diagnostics to flames and turbulent non-reacting jets. These studies included quench-free planar laser-induced fluorescence measurements of nitric oxide (NO) and mixture fraction measurements via Rayleigh scattering.

More Details

Quantitative resilience analysis through control design

Vugrin, Eric D.; Camphouse, Russell C.; Sunderland, Daniel S.

Critical infrastructure resilience has become a national priority for the U. S. Department of Homeland Security. System resilience has been studied for several decades in many different disciplines, but no standards or unifying methods exist for critical infrastructure resilience analysis. Few quantitative resilience methods exist, and those existing approaches tend to be rather simplistic and, hence, not capable of sufficiently assessing all aspects of critical infrastructure resilience. This report documents the results of a late-start Laboratory Directed Research and Development (LDRD) project that investigated the development of quantitative resilience through application of control design methods. Specifically, we conducted a survey of infrastructure models to assess what types of control design might be applicable for critical infrastructure resilience assessment. As a result of this survey, we developed a decision process that directs the resilience analyst to the control method that is most likely applicable to the system under consideration. Furthermore, we developed optimal control strategies for two sets of representative infrastructure systems to demonstrate how control methods could be used to assess the resilience of the systems to catastrophic disruptions. We present recommendations for future work to continue the development of quantitative resilience analysis methods.

More Details

Plasmonic devices and sensors built from ordered nanoporous materials

Allendorf, Mark D.; Houk, Ronald H.; Jacobs, Benjamin J.; El Gabaly Marquez, Farid E.

The objective of this project is to lay the foundation for using ordered nanoporous materials known as metal-organic frameworks (MOFs) to create devices and sensors whose properties are determined by the dimensions of the MOF lattice. Our hypothesis is that because of the very short (tens of angstroms) distances between pores within the unit cell of these materials, enhanced electro-optical properties will be obtained when the nanopores are infiltrated to create nanoclusters of metals and other materials. Synthetic methods used to produce metal nanoparticles in disordered templates or in solution typically lead to a distribution of particle sizes. In addition, creation of the smallest clusters, with sizes of a few to tens of atoms, remains very challenging. Nanoporous metal-organic frameworks (MOFs) are a promising solution to these problems, since their long-range crystalline order creates completely uniform pore sizes with potential for both steric and chemical stabilization. We report results of synthetic efforts. First, we describe a systematic investigation of silver nanocluster formation within MOFs using three representative MOF templates. The as-synthesized clusters are spectroscopically consistent with dimensions {le} 1 nm, with a significant fraction existing as Ag{sub 3} clusters, as shown by electron paramagnetic resonance. Importantly, we show conclusively that very rapid TEM-induced MOF degradation leads to agglomeration and stable, easily imaged particles, explaining prior reports of particles larger than MOF pores. These results solve an important riddle concerning MOF-based templates and suggest that heterostructures composed of highly uniform arrays of nanoparticles within MOFs are feasible. Second, a preliminary study of methods to incorporate fulleride (K{sub 3}C{sub 60}) guest molecules within MOF pores that will impart electrical conductivity is described.

More Details

Optimized nanoporous materials

Robinson, David R.; Jacobs, Benjamin J.; Ong, Markus D.; Tran, Kim T.; Langham, Mary E.; Ha, Cindy M.

Nanoporous materials have maximum practical surface areas for electrical charge storage; every point in an electrode is within a few atoms of an interface at which charge can be stored. Metal-electrolyte interfaces make the best use of surface area in porous materials. However, ion transport through long, narrow pores is slow. We seek to understand and optimize the tradeoff between capacity and transport. Modeling and measurements of nanoporous gold electrodes have allowed us to determine design principles, including the fact that these materials can deplete salt from the electrolyte, increasing resistance. We have developed fabrication techniques to demonstrate architectures, inspired by these principles, that may overcome the identified obstacles. A key concept is that electrodes should be as close together as possible; this is likely to involve an interpenetrating pore structure. However, this may prove extremely challenging to fabricate at the finest scales; a hierarchically porous structure can be a worthy compromise.
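The salt-depletion effect noted above can be illustrated with a back-of-the-envelope estimate (all parameter values below are hypothetical placeholders for illustration, not measurements from this work): compare the ions consumed in charging the double layer against the ions the pore electrolyte actually contains.

```python
F = 96485.0  # C/mol, Faraday constant

# Hypothetical parameters for an order-of-magnitude sketch
area_per_volume = 1e8   # m^2 of interface per m^3 of electrode
c_dl = 0.2              # F/m^2, double-layer capacitance per unit area
voltage = 1.0           # V, charging voltage
porosity = 0.5          # pore volume fraction
c_salt = 100.0          # mol/m^3 (0.1 M electrolyte)

# Ions needed to charge the double layer vs. ions resident in the pores
charge_per_volume = area_per_volume * c_dl * voltage  # C per m^3 of electrode
ions_needed = charge_per_volume / F                   # mol/m^3
ions_available = porosity * c_salt                    # mol/m^3

# A ratio above 1 means charging would exhaust the pore electrolyte,
# raising its resistance (the depletion effect seen in nanoporous gold).
depletion_ratio = ions_needed / ions_available
```

With these illustrative numbers the ratio exceeds one, which is why bringing a counter-electrode close (or reservoir access through a hierarchical pore structure) matters.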

More Details

Material compatibility and thermal aging of thermoelectric materials

Morales, Alfredo M.; Chames, Jeffery M.; Cliff, Miles; Gardea, Andrew D.; Whalen, Scott A.

In order to design a thermoelectric (TE) module suitable for long-term elevated-temperature use, Department 8651 has conducted parametric experiments to study material compatibility and thermal aging of TE materials. In addition, a comprehensive material characterization has been performed to examine the thermal stability of P- and N-based alloys and their interaction with interconnect diffusion barrier(s) and solder. At present, we have completed the 7-day aging experiments for 36 tiles, from ambient temperature to 250 C. The thermal behavior of P- and N-based alloys and their thermal interaction with both Ni and Co diffusion barriers and Au-Sn solder were examined. The preliminary results show that the microstructure, texture, alloy composition, and hardness of P-(Bi,Sb){sub 2}Te{sub 3} and N-Bi{sub 2}(Te,Se){sub 3} alloys are thermally stable up to 7 days of annealing at 250 C. However, metallurgical reactions between the Ni-phosphor barriers and the P-type base alloy were evident at temperatures {ge} 175 C. At 250 C, the depth (or distance) of the metallurgical reaction and/or Ni diffusion into P-(Bi,Sb){sub 2}Te{sub 3} is approximately 10-15 {micro}m. This thermal instability makes the Ni-phosphor barrier unsuitable for use at temperatures {ge} 175 C. The Co barrier appeared to be thermally stable and compatible with P-(Bi,Sb){sub 2}Te{sub 3} at all annealing temperatures, with the exception of minor Co diffusion into the Au-Sn solder at {ge} 175 C. The effects of Co diffusion on long-term system reliability and/or the thermal stability of the Co barrier are yet to be determined. Te evaporation and its subsequent reaction with the Au-Sn solder and the Ni and Co barriers on the ends of the tiles at temperatures {ge} 175 C were evident. The Te loss and its effect on the long-term required stoichiometry of P-(Bi,Sb){sub 2}Te{sub 3} are yet to be understood. The aging experiments of 90 days and 180 days are ongoing and are scheduled to be completed in 30 days and 150 days, respectively. Material characterization activities are continuing for the remaining tiles.

More Details

Compositional ordering and stability in nanostructured, bulk thermoelectric alloys

Medlin, Douglas L.

Thermoelectric materials have many applications in the conversion of thermal energy to electrical power and in solid-state cooling. One route to improving thermoelectric energy conversion efficiency in bulk material is to embed nanoscale inclusions. This report summarizes key results from a recently completed LDRD project exploring the science underpinning the formation and stability of nanostructures in bulk thermoelectric materials and the quantitative relationships between such structures and thermoelectric properties.

More Details

Extreme solid state refrigeration using nanostructured Bi-Te alloys

Sharma, Peter A.; Morales, Alfredo M.; Spataru, Dan C.

Materials are desperately needed for cryogenic solid state refrigeration. We have investigated nanostructured Bi-Te alloys for their potential use in Ettingshausen refrigeration to liquid nitrogen temperatures. These alloys form alternating layers of Bi{sub 2} and Bi{sub 2}Te{sub 3} blocks in equilibrium. The composition Bi{sub 4}Te{sub 3} was identified as having the greatest potential for having a high Ettingshausen figure of merit. Both single crystal and polycrystalline forms of this material were synthesized. After evaluating the Ettingshausen figure of merit for a large, high quality polycrystal, we simulated the limits of practical refrigeration in this material from 200 to 77 K using a simple device model. The band structure was also computed and compared to experiments. We discuss the crystal growth, transport physics, and practical refrigeration potential of Bi-Te alloys.
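For context, the Ettingshausen figure of merit mentioned above is, in standard treatments of thermomagnetic cooling (stated here from the general literature, not quoted from this report), the analogue of the thermoelectric figure of merit with the Seebeck coefficient replaced by the product of the Nernst coefficient and the magnetic field:

```latex
% N: Nernst coefficient, B: magnetic field,
% \sigma: electrical conductivity, \kappa: thermal conductivity
Z_{E} = \frac{(N B)^{2}\,\sigma}{\kappa},
\qquad\text{compare } Z = \frac{S^{2}\sigma}{\kappa}\ \text{for thermoelectric cooling.}
```

As with $ZT$, the dimensionless product $Z_{E}T$ sets the practically attainable temperature span of the refrigerator.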

More Details

Plasmonic filters

Shaner, Eric A.; Passmore, Brandon S.; Barrick, Todd A.

Metal films perforated with subwavelength hole arrays have been shown to exhibit an effect known as Extraordinary Transmission (EOT). In EOT devices, optical transmission passbands arise that can have up to 90% transmission and a bandwidth that is only a few percent of the designed center wavelength. By placing a tunable dielectric in proximity to the EOT mesh, one can tune the center frequency of the passband. We have demonstrated over 1 micron of passive tuning in structures designed for an 11 micron center wavelength. If a suitable midwave (3-5 micron) tunable dielectric (perhaps BaTiO{sub 3}) were integrated with an EOT mesh designed for midwave operation, it is possible that a fast, voltage-tunable, low-temperature filter solution could be demonstrated with a several-hundred-nanometer passband. Such an element could, for example, replace certain components in a filter-wheel solution.

More Details

Advanced optical measurements for characterizing photophysical properties of single nanoparticles

Davis, Ryan W.; Hayes, Dulce C.; Wheeler, David R.; Polsky, Ronen P.; Brozik, Susan M.

Formation of complex nanomaterials would ideally involve single-pot reaction conditions with one reactive site per nanoparticle, resulting in a high yield of incrementally modified or oriented structures. Many studies in nanoparticle functionalization have sought to generate highly uniform nanoparticles with tailorable surface chemistry necessary to produce such conjugates, with limited success. In order to overcome these limitations, we have modified commercially available nanoparticles with multiple potential reaction sites for conjugation with single ssDNAs, proteins, and small unilamellar vesicles. These approaches combined heterobifunctional and biochemical template chemistries with single molecule optical methods for improved control of nanomaterial functionalization. Several interesting analytical results have been achieved by leveraging techniques unique to SNL, and provide multiple paths for future improvements for multiplex nanoparticle synthesis and characterization. Hyperspectral imaging has proven especially useful for assaying substrate immobilized fluorescent particles. In dynamic environments, temporal correlation spectroscopies have been employed for tracking changes in diffusion/hydrodynamic radii, particle size distributions, and identifying mobile versus immobile sample fractions at unbounded dilution. Finally, Raman fingerprinting of biological conjugates has been enabled by resonant signal enhancement provided by intimate interactions with nanoparticles and composite nanoshells.

More Details

Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems

Burton, David; Tarman, Thomas D.; Van Leeuwen, Brian P.; Onunkwo, Uzoma O.; Urias, Vincent U.; McDonald, Michael J.

This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers, and other network equipment, computer emulations (e.g., virtual machines), and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

More Details

Viscoelastic coupling of nanoelectromechanical resonators

Simonson, Robert J.; Staton, Alan W.

This report summarizes work to date on a new collaboration between Sandia National Laboratories and the California Institute of Technology (Caltech) to utilize nanoelectromechanical resonators designed at Caltech as platforms to measure the mechanical properties of polymeric materials at length scales on the order of 10-50 nm. Caltech has succeeded in reproducibly building cantilever resonators having major dimensions on the order of 2-5 microns. These devices are fabricated in pairs, with free ends separated by reproducible gaps having dimensions on the order of 10-50 nm. By controlled placement of materials that bridge the very small gap between resonators, the mechanical devices become coupled through the test material, and the transmission of energy between the devices can be monitored. This should allow for measurements of viscoelastic properties of polymeric materials at high frequency over short distances. Our work to date has been directed toward establishing this measurement capability at Sandia.

More Details

Parallel contingency statistics with Titan

Pebay, Philippe P.; Thompson, David C.

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
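The parallel design can be pictured with a small sketch (plain Python for illustration; the actual engine is the C++ VTK/Titan code): each process tabulates joint value counts over its slice of the data, and a reduction step merges the per-process tables. Because a contingency table grows with the number of distinct value pairs, this merge is what keeps the engine from the optimal speed-up of fixed-size statistics.

```python
from collections import Counter

def contingency_table(xs, ys):
    """Tabulate joint counts of (x, y) value pairs for one data slice."""
    return Counter(zip(xs, ys))

def merge(tables):
    """Reduction step: per-process tables are summed entry-wise.
    Unlike a fixed-size mean/variance reduction, the message size here
    scales with the number of distinct pairs, limiting parallel speed-up."""
    total = Counter()
    for t in tables:
        total += t
    return total

# Two "processes" each summarize their slice of the data.
t0 = contingency_table(["a", "a", "b"], [0, 1, 1])
t1 = contingency_table(["a", "b", "b"], [1, 1, 0])
table = merge([t0, t1])
print(table[("a", 1)])  # joint count of x="a", y=1 across both slices: 2
```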

More Details

Toward improved branch prediction through data mining

Hemmert, Karl S.

Data mining and machine learning techniques can be applied to computer system design to aid in optimizing design decisions and improving system runtime performance. Data mining techniques have been investigated in the context of branch prediction. Specifically, the performance of traditional branch predictors has been compared to that of data mining algorithms. Additionally, we evaluated whether additional features available in the architectural state might serve to further improve branch prediction. Results show that data mining techniques offer potential for improved branch prediction, especially when register file contents are included as a feature set.
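As a concrete illustration of a machine-learning branch predictor (a generic perceptron-style predictor sketched under assumptions; the specific data mining algorithms and feature encodings studied in this work may differ), outcomes are encoded as +1/-1 and a linear model is trained online on a feature vector that could be extended with architectural state such as register file contents:

```python
class PerceptronPredictor:
    """Linear online predictor over recent branch outcomes (+1 taken, -1 not).
    Extra architectural features would simply extend the feature vector."""

    def __init__(self, n_features):
        self.w = [0] * (n_features + 1)  # index 0 is the bias weight

    def predict(self, feats):
        s = self.w[0] + sum(w * f for w, f in zip(self.w[1:], feats))
        return 1 if s >= 0 else -1

    def update(self, feats, outcome):
        # Train only on mispredictions (classic perceptron rule)
        if self.predict(feats) != outcome:
            self.w[0] += outcome
            self.w[1:] = [w + outcome * f for w, f in zip(self.w[1:], feats)]

# Learn a strictly alternating branch: the next outcome flips the last one.
p = PerceptronPredictor(n_features=2)
history = [1, -1]   # two most recent outcomes, oldest first
correct = 0
for _ in range(100):
    outcome = -history[-1]
    correct += p.predict(history) == outcome
    p.update(history, outcome)
    history = [history[-1], outcome]
# After a handful of mispredictions the alternating pattern is learned.
```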

More Details

Transmissive infrared frequency selective surfaces and infrared antennas : final report for LDRD 105749

Davids, Paul D.; Cruz-Cabrera, A.A.; Basilio, Lorena I.; Wendt, J.R.; Kemme, S.A.; Johnson, William Arthur.; Loui, Hung L.

Plasmonic structures open up new opportunities in photonic devices, sometimes offering an alternate method to perform a function and sometimes offering capabilities not possible with standard optics. In this LDRD we successfully demonstrated metal coatings on optical surfaces that do not adversely affect the transmission of those surfaces at the design frequency. This technology could be applied as an RF noise blocking layer across an optical aperture or as a method to apply an electric field to an active electro-optic device without affecting optical performance. We also demonstrated thin optical absorbers using similar patterned surfaces. These infrared optical antennas show promise as a method to improve performance in mercury cadmium telluride detectors. Furthermore, these structures could be coupled with other components to lead to direct rectification of infrared radiation. This possibility leads to a new method for infrared detection and energy harvesting of infrared radiation.

More Details

Molecular fountain

Strecker, Kevin S.; Chandler, D.W.

A molecular fountain directs slowly moving molecules against gravity to further slow them to translational energies at which they can be trapped and studied. If the molecules are initially slow enough, they will return some time later to the position from which they were launched. Because this round-trip time can be on the order of a second, a single molecule can be observed for times sufficient to perform Hz-level spectroscopy. The goal of this LDRD proposal was to construct a novel Molecular Fountain apparatus capable of producing dilute samples of molecules at near-zero temperatures in well-defined, user-selectable quantum states. The slowly moving molecules used in this research are produced by the previously developed Kinematic Cooling technique, which uses a crossed atomic and molecular beam apparatus to generate single-rotational-level molecular samples moving slowly in the laboratory reference frame. The Kinematic Cooling technique produces cold molecules from a supersonic molecular beam via single collisions with a supersonic atomic beam. A single collision of an atom with a molecule occurring at the correct energy and relative velocity can cause a small fraction of the molecules to move very slowly, vertically against gravity, in the laboratory. These slowly moving molecules are captured by an electrostatic hexapole guiding field that both orients and focuses the molecules. The molecules are focused into the ionization region of a time-of-flight mass spectrometer and are ionized by laser radiation. The new molecular fountain apparatus was built using a new molecular beam design that has allowed us to miniaturize the apparatus. This new design minimizes the volume and surface area of the machine, allowing smaller pumps to maintain the necessary background pressures needed for these experiments.
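The quoted round-trip time of order one second follows from simple ballistics in the fountain geometry; a minimal sketch (illustrative only, with no apparatus-specific numbers):

```python
g = 9.81  # m/s^2, gravitational acceleration

def round_trip_time(v0):
    """Time for a molecule launched upward at speed v0 to fall back."""
    return 2 * v0 / g

def launch_speed(t):
    """Launch speed needed for a round trip lasting t seconds."""
    return g * t / 2

v0 = launch_speed(1.0)   # ~4.9 m/s gives a ~1 s observation window
h_max = v0**2 / (2 * g)  # apex of the fountain, ~1.2 m
```

So molecules moving at only a few m/s, as produced by kinematic cooling, rise roughly a meter and return about a second later, which sets the available spectroscopic observation time.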

More Details

Final LDRD report : enhanced spontaneous emission rate in visible III-nitride LEDs using 3D photonic crystal cavities

Fischer, Arthur J.; Subramania, Ganapathi S.; Lee, Yun-Ju L.; Koleske, Daniel K.; Li, Qiming L.; Wang, George T.; Luk, Ting S.; Fullmer, Kristine W.

The fundamental spontaneous emission rate for a photon source can be modified by placing the emitter inside a periodic dielectric structure allowing the emission to be dramatically enhanced or suppressed depending on the intended application. We have investigated the relatively unexplored realm of interaction between semiconductor emitters and three dimensional photonic crystals in the visible spectrum. Although this interaction has been investigated at longer wavelengths, very little work has been done in the visible spectrum. During the course of this LDRD, we have fabricated TiO{sub 2} logpile photonic crystal structures with the shortest wavelength band gap ever demonstrated. A variety of different emitters with emission between 365 nm and 700 nm were incorporated into photonic crystal structures. Time-integrated and time-resolved photoluminescence measurements were performed to measure changes to the spontaneous emission rate. Both enhanced and suppressed emission were demonstrated and attributed to changes to the photonic density of states.

More Details

Scalable analysis tools for sensitivity analysis and UQ (3160) results

Ice, Lisa I.; Fabian, Nathan D.; Moreland, Kenneth D.; Bennett, Janine C.; Thompson, David C.; Karelitz, David B.

The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

More Details

Advanced fuel chemistry for advanced engines

Taatjes, Craig A.; Miller, James A.; Fernandes, Ravi X.; Zador, Judit Z.; Jusinski, Leonard E.

Autoignition chemistry is central to predictive modeling of many advanced engine designs that combine high efficiency and low inherent pollutant emissions. This chemistry, and especially its pressure dependence, is poorly known for fuels derived from heavy petroleum and for biofuels, both of which are becoming increasingly prominent in the nation's fuel stream. We have investigated the pressure dependence of key ignition reactions for a series of molecules representative of non-traditional and alternative fuels. These investigations combined experimental characterization of hydroxyl radical production in well-controlled, photolytically initiated oxidation with a hybrid modeling strategy that linked detailed quantum chemistry and computational kinetics of critical reactions with rate-equation models of the global chemical system. Comprehensive mechanisms for autoignition generally ignore the pressure dependence of branching fractions in the important alkyl + O{sub 2} reaction systems; however, we have demonstrated that pressure-dependent 'formally direct' pathways persist at in-cylinder pressures.

More Details

Approaches for scalable modeling and emulation of cyber systems : LDRD final report

Mayo, Jackson M.; Minnich, Ronald G.; Rudish, Don W.; Armstrong, Robert C.

The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of {approx} 10{sup 6} computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with {approx} 10{sup 6} virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

More Details
Results 73001–73200 of 96,771