The United States produces only about one-third of the more than 20 million barrels of petroleum that it consumes daily. Oil imports into the country are roughly equivalent to the amount consumed in the transportation sector. Hence the nation in general, and the transportation sector in particular, are vulnerable to supply disruptions and price shocks. The situation is anticipated to worsen as competition for limited global supplies increases and oil-rich nations become increasingly willing to manipulate the markets for this resource as a means to achieve political ends. The goal of this project was the development and improvement of technologies and the knowledge base necessary to produce and qualify a universal fuel from diverse feedstocks readily available in North America and elsewhere (e.g., petroleum, natural gas, coal, and biomass) as a prudent and positive step toward mitigating this vulnerability. Three major focus areas (feedstock transformation, fuel formulation, and fuel characterization) were identified, and each was addressed. The specific activities summarized herein were identified in consultation with industry to set the stage for collaboration. Two activities were undertaken in the area of feedstock transformation. The first focused on the chemistry and operation of autothermal reforming, with an emphasis on understanding, and therefore preventing, soot formation. The second focused on improving the economics of oxygen production, particularly for smaller operations, by integrating membrane separations with pressure swing adsorption. In the fuel formulation area, the chemistry of converting small molecules readily produced from syngas directly to fuels was examined. Consistent with the advice from industry, this activity avoided improving known approaches, giving it an exploratory character. Finally, the fuel characterization task focused on providing a direct and quantifiable comparison of diesel fuel and JP-8.
The author will describe two-photon-resonant LIF detection of CO, O, and H. Application of these techniques in flames frequently suffers from significant photolytic interferences caused by the intense UV excitation pulses required to produce measurable signal. When compared to nanosecond excitation, the use of short pulse (picosecond) excitation can significantly reduce the effect of the photolytic interference. Results of recent atomic oxygen imaging experiments using picosecond- and nanosecond-duration laser pulses will be presented, and potential improvements to CO and H imaging will be discussed.
Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance about the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence in our regulatory assessment.
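In the standard Bayesian model-selection setting (the abstract does not give the specific form used), the posterior probability of each candidate model \(M_k\) given the calibration data \(D\) is
\[ P(M_k \mid D) = \frac{P(D \mid M_k)\, P(M_k)}{\sum_j P(D \mid M_j)\, P(M_j)}, \]
where \(P(D \mid M_k)\) is the marginal likelihood (evidence) obtained by integrating the likelihood over the model's parameters; the model maximizing this posterior would then be carried forward to the regulatory assessment.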
In this work, we present two methods for solving overdetermined systems of the Time of Arrival (TOA) geolocation equations that achieve the minimum possible variance in all cases, not just when the satellites are at large equal radii. One of these techniques gives two solutions, and the other gives four solutions.
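In a standard formulation (the paper's own notation is not reproduced here), the TOA equations relate the unknown emitter position \(\mathbf{x}\) and emission time \(t_0\) to the known satellite positions \(\mathbf{s}_i\) and measured arrival times \(t_i\):
\[ \lVert \mathbf{x} - \mathbf{s}_i \rVert = c\,(t_i - t_0), \qquad i = 1, \ldots, n. \]
With four unknowns, the system is overdetermined for \(n > 4\) and must be solved in a least-squares or minimum-variance sense.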
A three-component balance system has been developed and implemented to measure the forces and moments on a sub-scale missile fin model interacting with the wake and shed vortex from an upstream fin. Measurements were made from Mach 0.5 to 0.8 with both the upstream and downstream fins pitched between -5° and 10° angle of attack. The results show that the downstream fin's forces and moments are shifted from the baseline single-fin values, depending on the angle of attack of the upstream fin. Mach number had only a secondary effect, and its influence was found to grow stronger as the angles of attack of the upstream and downstream fins diverged.
American Nuclear Society Embedded Topical Meeting - 2007 International Topical Meeting on Safety and Technology of Nuclear Hydrogen Production, Control, and Management
A preliminary study was conducted which considered capturing carbon dioxide from fossil-fired power plants and combining it with nuclear hydrogen in order to produce alternative liquid fuels for transportation. Among the alternative liquids which can be used as fuel in internal combustion engines, the two that are most promising are methanol and ethanol. We chose these two because they are relatively simple compounds and can be used with only minor changes to the fuel systems of most automobiles today. In fact, some vehicles today can operate with any combination of conventional gasoline, ethanol, or methanol. We estimated the quantity of carbon dioxide that would be emitted by fossil-fired power plants in the future. We then used this information to determine how much ethanol or methanol could be created if enough hydrogen were made available. Using the quantity of hydrogen required and the thermodynamics of the reactions involved, we estimated the nuclear power that would be needed to produce the liquid fuel. This amount of liquid fuel was then used to estimate the effect of such a program on conventional gasoline usage, the need for foreign oil, and the decrease in CO2 emissions.
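The overall hydrogenation reactions implied here (standard stoichiometry; the paper's specific process chemistry is not detailed in the abstract) are
\[ \mathrm{CO_2 + 3\,H_2 \rightarrow CH_3OH + H_2O}, \qquad \mathrm{2\,CO_2 + 6\,H_2 \rightarrow C_2H_5OH + 3\,H_2O}, \]
so each mole of carbon fixed as fuel requires roughly three moles of hydrogen, which sets the scale of the nuclear hydrogen demand.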
The dimensionless extinction coefficient, Ke, was measured for soot produced in 2 m JP-8 pool fires. Light extinction and gravimetric sampling measurements were performed simultaneously at 635 and 1310 nm wavelengths at three heights in the flame zone and in the overfire region. Measured average Ke values of 8.4 ± 1.2 at 635 nm and 8.7 ± 1.1 at 1310 nm in the overfire region agree well with values of 8 to 10 recently reported for different fuels and flame conditions. The overfire Ke values are also relatively independent of wavelength, in agreement with recent findings for JP-8 soot in smaller flames. Ke was nearly constant at 635 nm for all sampling locations in the large fires. However, at 1310 nm, the overfire Ke was higher than in the flame zone. Chemical analysis of physically sampled soot shows variations in carbon-to-hydrogen (C/H) ratio and polycyclic aromatic hydrocarbon (PAH) concentration that may account for the smaller Ke values measured in the flame zone. Rayleigh-Debye-Gans theory of scattering for polydisperse fractal aggregates (RDG-PFA) was applied to the measured aggregate fractal dimensions; it under-predicted the extinction coefficient by 17-30% at 635 nm using commonly accepted refractive indices of soot, but agreed well with the experiments using the more recently published refractive index of 1.99 - 0.89i. This study represents the first measurements of soot chemistry, morphology, and optical properties in the flame zone of large, fully-turbulent pool fires, and emphasizes the importance of accurate measurements of optical properties in both the flame zone and overfire regions for models of radiative transport and for interpretation of laser-based diagnostics of soot volume fraction and temperature.
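In such light-extinction measurements, the dimensionless extinction coefficient is commonly defined through
\[ \frac{I}{I_0} = \exp\!\left(-\frac{K_e\, f_v\, L}{\lambda}\right), \]
where \(I/I_0\) is the measured transmittance over path length \(L\), \(f_v\) is the soot volume fraction (here obtained gravimetrically), and \(\lambda\) is the wavelength. This is the standard definition; the paper's own notation may differ slightly.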
Cities without an early warning system of indwelling sensors can consider monitoring their networks manually, especially during times of heightened security levels. We consider the problem of calculating an optimal schedule for manual sampling in a municipal water network. Preliminary computations with a small-scale example indicate that during normal times, manual sampling can provide some benefit, but it is far inferior to an indwelling sensor network. However, given information that significantly constrains the nature of an imminent threat, manual sampling can perform as well as a small sensor network designed to handle normal threats.
Accurate material models are fundamental to predictive structural finite element models. Because potting foams are routinely used to mitigate shock and vibration of encapsulated components in electro/mechanical systems, accurate material models of foams are needed. A linear-viscoelastic foam constitutive model has been developed to represent the foam's stiffness and damping throughout an application space defined by temperature, strain rate or frequency, and strain level. Validation of this linear-viscoelastic model, which is integrated into the Salinas structural dynamics code, is being achieved by modeling and testing a series of structural geometries of increasing complexity that have been designed to ensure sensitivity to material parameters. Both experimental and analytical uncertainties are being quantified to ensure a fair assessment of model validity. Quantitative model validation metrics are being developed to provide a means of comparing analytical model predictions to observations made in the experiments. This paper is one of several recent papers documenting the validation process for simple to complex structures with foam-encapsulated components. This paper specifically focuses on model validation over a wide temperature range, using a simple dumbbell structure for modal testing and simulation. Material variations of density and modulus have been included. A double-blind validation process is described that brings together test data with model predictions.
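A linear-viscoelastic model of this type is commonly written as a hereditary integral with a Prony-series relaxation modulus (shown here for illustration; the specific form implemented in Salinas may differ):
\[ \sigma(t) = \int_0^t G(t - \tau)\, \dot{\varepsilon}(\tau)\, d\tau, \qquad G(t) = G_\infty + \sum_{i=1}^{N} G_i\, e^{-t/\tau_i}, \]
with temperature dependence typically introduced through time-temperature superposition, i.e., a temperature-dependent shift factor applied to the time scale.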
In this paper we present the results of a study to quantify uncertainty in experimental modal parameters due to test set-up uncertainty, measurement uncertainty, and data analysis uncertainty. Uncertainty quantification is required to accomplish a number of tasks including model updating, model validation, and assessment of unit-to-unit variation. We consider uncertainty in the modal parameters due to a number of sources including force input location/direction, force amplitude, instrumentation bias, support conditions, and the analysis method (algorithmic variation). We compute the total uncertainty due to all of these sources, and discuss the importance of proper characterization of bias errors on the total uncertainty. This uncertainty quantification was applied to modal tests designed to assess modeling capabilities for emerging designs of wind turbine blades. In an example, we show that unit-to-unit variation of the modal parameters of two nominally identical wind turbine blades is successfully assessed by performing uncertainty quantification. This study aims to demonstrate the importance of proper pre-test design and analysis for understanding the uncertainty in modal parameters, in particular uncertainty due to bias error.
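As a generic illustration (not necessarily the formulation used here), independent random uncertainty sources in a modal parameter combine in root-sum-square fashion,
\[ u_c = \sqrt{\sum_i u_i^2}, \]
whereas bias errors shift every measurement in the same direction and therefore must be characterized and reported separately rather than folded into \(u_c\); this distinction is why proper characterization of bias errors matters for the total uncertainty.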
In order to predict blast damage on structures, it is current industry practice to decouple shock calculations from computational structural dynamics calculations. Pressure-time histories from experimental tests were used to assess computational models developed using a shock physics code (CTH) and a structural dynamics code (PRONTO3D). CTH was shown to be able to reproduce three independent characteristics of a blast wave: arrival time, peak overpressure, and decay time. Excellent agreement was achieved for early times, where the rigid wall assumptions used in the model analysis were valid. A one-way coupling was performed for this blast-structure interaction problem by taking the pressure-time history from the shock physics simulation and applying it to the structure at the corresponding locations in the PRONTO3D simulation to capture the structural deformation. In general, the one-way coupling was shown to be a cost-effective means of predicting the structural response when the time duration of the load was less than the response time of the structure. Therefore, the computational models were successfully evaluated for the internal blast problems studied herein.
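For context, the three blast-wave characteristics assessed (arrival time, peak overpressure, and decay time) are exactly the parameters of the modified Friedlander waveform commonly used in the blast literature to describe pressure-time histories (the paper itself compares against measured records):
\[ p(t) = p_{\max}\left(1 - \frac{t - t_a}{t_d}\right) e^{-b\,(t - t_a)/t_d}, \qquad t \ge t_a, \]
where \(t_a\) is the arrival time, \(p_{\max}\) the peak overpressure, \(t_d\) the positive-phase duration, and \(b\) a decay constant.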
This paper investigates methods for coupling analytical dynamic models of subcomponents with experimentally derived models in order to predict the response of the combined system, focusing on modal substructuring or Component Mode Synthesis (CMS), the experimental analog to the ubiquitous Craig-Bampton method. While the basic methods for combining experimental and analytical models have been around for many years, it appears that these are not often applied successfully. The CMS theory is presented along with a new strategy, dubbed the Maximum Rank Coordinate Choice (MRCC), that ensures that the constrained degrees of freedom can be found from the unconstrained without encountering numerical ill conditioning. The experimental modal substructuring approach is also compared with frequency response function coupling, sometimes called admittance or impedance coupling. These methods are used both to analytically remove models of a test fixture (required to include rotational degrees of freedom) and to predict the response of the coupled beams. Both rigid and elastic models for the fixture are considered. Similar results are obtained using either method although the modal substructuring method yields a more compact database and allows one to more easily interrogate the resulting system model to assure that physically meaningful results have been obtained. A method for coupling the fixture model to experimental measurements, dubbed the Modal Constraint for Fixture and Subsystem (MCFS) is presented that greatly improves the result and robustness when an elastic fixture model is used.
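For orientation, FRF (admittance) coupling of subsystems can be written in the standard Lagrange-multiplier frequency-based substructuring form (a textbook result, not specific to this paper):
\[ H^{\text{coupled}}(\omega) = H - H B^{T} \left( B H B^{T} \right)^{-1} B H, \]
where \(H(\omega)\) is the block-diagonal matrix of the uncoupled subsystem FRFs and \(B\) is the signed Boolean matrix enforcing compatibility at the interface degrees of freedom.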
Like other interfaces, equilibrium grain boundaries are smooth at low temperature and rough at high temperature; however, little attention has been paid to grain boundary roughening except in the case of faceted boundaries. Using molecular dynamics simulations of face-centered cubic Ni, we studied two closely related grain boundaries with different boundary planes. In spite of their similarity, their roughening temperatures differ by several hundred degrees, and boundary mobility is much larger above the roughening temperature. This has important implications for microstructural development during metallurgical processes.
We have discussed the key areas of the IR process that should not be circumvented if an organization is to achieve a high level of assurance in high-dollar, high-risk cost estimates; lessons learned; and possible solutions to improve the process. In summary, the best practices described are as follows: develop a corporate policy for review of cost estimates based on TPC and potential financial and reputational risk; develop a database of qualified, experienced personnel who can perform well as IR team members; spell out the process for approval of review team members, including the executive approval process; address review team availability by developing review team member alternates; increase lead-time notice on high-dollar, high-risk estimates by developing an advance-notice system with internal organizations; improve coordination of the estimating team's responses to the review team's questions and concerns; and develop alternatives, such as representatives and electronic briefings, to alleviate challenges in scheduling executives for cost estimate briefings. Each organization has its own needs, culture, and level of maturity. If you have an IR process that works, great! If not, we hope that we have sparked your interest in developing a process that works for your company. The goal is to continuously improve and further refine the process to meet the needs of both external and internal customers. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
When a system design approach is applied to wind turbine blades, manufacturing and structural requirements are included along with aerodynamic considerations in the design optimization. The resulting system-driven design includes several innovative structural features, such as flat-back airfoils, a constant-thickness carbon spar cap, and a thin, large-diameter root. Subscale blades were manufactured to evaluate the as-built integrated performance. The design achieved a 22% reduction in mass, and the blade withstood over 300% of its design load during testing. Compressive strains of nearly 0.9% were measured in the carbon spar cap. The test results from this and an earlier design are compared, as are finite element models of each design. Included in the analysis is a review of the acoustic emission events that were detected through the use of surface-mounted microphones.
When measuring the structural dynamic response of test objects, the desired data are sometimes contaminated by some type of undesired periodic signal. This can occur due to N-per-revolution excitation in systems with rotating components, or when dither excitation is used. The response due to these (typically unmeasured) periodic excitations causes spikes in system frequency response functions (FRFs) and poor coherence. This paper describes a technique to remove these periodic components from the measured data. The data must be measured as a continuous time history, which is initially processed as a single, long record. Given an initial guess for the periodic signal's fundamental frequency, an automated search identifies the actual fundamental frequency to very high accuracy. Then the fundamental and a user-specified number of harmonics are removed from the acquired data to create new time histories, which can then be processed using standard signal processing techniques. An example of this technique will be presented from a test where a vehicle is dithered with a fixed-frequency sinusoidal force to linearize the behavior of the shock absorbers, while the acceleration responses due to a random force applied elsewhere on the vehicle are measured.
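A minimal sketch of how such harmonic removal could be implemented, assuming a least-squares fit of sine/cosine pairs and a simple scan to refine the fundamental (the function name and parameters are illustrative, not the authors' algorithm):

```python
import numpy as np

def remove_periodic(y, fs, f0_guess, n_harmonics=5, search_width=0.05):
    """Subtract a periodic signal (fundamental + harmonics) from y.

    y: measured time history; fs: sample rate (Hz);
    f0_guess: initial guess at the fundamental (Hz).
    Returns (y_clean, f0) where f0 is the refined fundamental.
    """
    t = np.arange(len(y)) / fs

    def fit(f0):
        # Least-squares fit of sin/cos pairs at f0 and its harmonics;
        # return the residual power and the fitted periodic component.
        cols = []
        for k in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * f0 * t))
            cols.append(np.cos(2 * np.pi * k * f0 * t))
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        periodic = A @ coef
        return np.sum((y - periodic) ** 2), periodic

    # Refine f0 with a simple scan around the guess; a bracketed 1-D
    # minimization would give higher accuracy on very long records.
    freqs = np.linspace(f0_guess * (1 - search_width),
                        f0_guess * (1 + search_width), 201)
    powers = [fit(f)[0] for f in freqs]
    f0 = freqs[int(np.argmin(powers))]

    _, periodic = fit(f0)
    return y - periodic, f0
```

The cleaned record returned here could then be block-averaged into FRFs with standard signal processing, free of the spikes at the dither frequency and its harmonics.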
Partitioned global address space (PGAS) programming models have been identified as one of the few viable approaches for dealing with emerging many-core systems. These models tend to generate many small messages, which requires specific support from the network interface hardware to enable efficient execution. In the past, Cray included E-registers on the Cray T3E to support the SHMEM API; however, with the advent of multi-core processors, the balance of computation to communication capabilities has shifted toward computation. This paper explores the message rates that are achievable with multi-core processors and simplified PGAS support on a more conventional network interface. For message rate tests, we find that simple network interface hardware is more than sufficient. We also find that even typical data distributions, such as cyclic or block-cyclic, do not need specialized hardware support. Finally, we assess the impact of such support on the well known RandomAccess benchmark.
In this paper, we present an optimal method for calculating turning maneuvers for an unmanned aerial vehicle (UAV) developed for ecological research. The algorithm calculates several possible solutions using vectors represented in complex notation, and selects the shortest turning path given constraints determined by the aircraft. This algorithm considers the UAV's turning capabilities, generating a two-dimensional path that is feasible for the UAV to fly. We generate a test flight path and show that the UAV is capable of following the turn maneuvers.
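A sketch of the kind of complex-notation turn computation described, assuming the simplest path structure (one minimum-radius turn followed by a straight leg to a goal point); all names and the path structure are illustrative, not the paper's algorithm:

```python
import cmath, math

def shortest_turn(p, psi, g, r):
    """Shortest turn-then-straight path from pose (p, psi) to point g.

    p, g: complex positions; psi: heading (rad); r: minimum turn radius.
    Returns (length, direction), direction +1 = left turn, -1 = right.
    """
    h = cmath.exp(1j * psi)          # unit heading vector
    best = (math.inf, 0)
    for s in (+1, -1):               # +1: left/CCW circle, -1: right/CW
        c = p + s * 1j * r * h       # turn-circle center
        v = g - c
        d = abs(v)
        if d < r:                    # goal inside circle: no tangent exists
            continue
        alpha = math.acos(r / d)     # angle between v and the tangent point
        t_pt = c + r * cmath.exp(1j * (cmath.phase(v) - s * alpha))
        # arc swept from the start point to the tangent point, direction s
        sweep = (s * (cmath.phase(t_pt - c)
                      - cmath.phase(p - c))) % (2 * math.pi)
        length = r * sweep + math.sqrt(d * d - r * r)
        if length < best[0]:
            best = (length, s)
    return best
```

Evaluating both turn circles and keeping the shorter candidate mirrors the abstract's "calculate several possible solutions, then select the shortest feasible path" structure.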
Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, they also present significant challenges in data partitioning and load balancing. As the mesh adapts to the solution, the partitioning requirements change. By explicitly considering these dynamic conditions, the scalability for large, realistic simulations could possibly be significantly improved. Our hypothesis is that adaptive partitioning, meaning dynamic and automatic switching of partitioning techniques, based on the current run-time state, can be beneficial for these simulations. However, switching partitioners can be expensive due to differences in the algorithms' native mapping of data onto processors. We suggest forcing a uniform starting point for all included partitioners. We present a penalty-based method for determining whether switching is beneficial. We study the effects on data migration, as well as on overall cost, of using the uniform starting point and the switching-penalties to select the best partitioning algorithm, among a set of graph-based and geometric partitioning algorithms, for each adaptive time-step for four different adaptive scientific applications. The results show that data migration can be significantly reduced and that adaptive partitioning indeed can be effective for unstructured adaptive applications.
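A sketch of the penalty-based switching decision (hypothetical names; the paper's cost and penalty models are more detailed):

```python
def choose_partitioner(current, candidates, predicted_cost, migration_penalty):
    """Select the partitioner for the next adaptive step.

    predicted_cost(p): estimated partitioning-plus-communication cost
        if partitioner p is used for this step.
    migration_penalty(a, b): estimated one-time cost of moving data
        from partitioner a's layout to partitioner b's layout.
    """
    best, best_cost = current, predicted_cost(current)
    for p in candidates:
        if p is current:
            continue
        # Switching is beneficial only if the candidate's predicted cost
        # plus the migration penalty undercuts staying with `current`.
        cost = predicted_cost(p) + migration_penalty(current, p)
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```

The uniform starting point proposed in the paper effectively reduces the `migration_penalty` term, since all candidate partitioners then share a common mapping of data onto processors.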
17th Annual International Symposium of the International Council on Systems Engineering, INCOSE 2007 - Systems Engineering: Key to Intelligent Enterprises
In this paper, we introduce EXACT, the EXperimental Algorithmics Computational Toolkit. EXACT is a software framework for describing, controlling, and analyzing computer experiments. It provides the experimentalist with convenient software tools to ease and organize the entire experimental process, including the description of factors and levels, the design of experiments, the control of experimental runs, the archiving of results, and analysis of results. As a case study for EXACT, we describe its interaction with FAST, the Sandia Framework for Agile Software Testing. EXACT and FAST now manage the nightly testing of several large software projects at Sandia. We also discuss EXACT's advanced features, which include a driver module that controls complex experiments such as comparisons of parallel algorithms.
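To illustrate the factors-and-levels idea (this sketch is not EXACT's actual API; the factor names are hypothetical), a full-factorial design can be enumerated directly from named factors:

```python
from itertools import product

# Hypothetical factors and levels for a parallel-algorithm comparison.
factors = {
    "algorithm": ["graph", "geometric"],
    "processors": [1, 2, 4, 8],
    "mesh": ["coarse", "fine"],
}

# Each experiment is one combination of levels, one per factor.
experiments = [dict(zip(factors, levels))
               for levels in product(*factors.values())]

for run_id, config in enumerate(experiments):
    print(run_id, config)   # each config would drive one experimental run
```

A framework like EXACT layers the remaining stages on top of this enumeration: dispatching runs, archiving each run's results keyed by its configuration, and analyzing the archive.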
Si single-junction photocells manufactured by Sandia National Laboratories and commercially available multiple-junction GaAs photocells were tested in a pulsed, high-dose, mixed gamma-neutron environment. The Si photocells were also tested in a steady-state gamma environment at two different dose rates.
Proceedings of SPIE - The International Society for Optical Engineering
Harrison, M.J.; Doty, F.P.
Lanthanide halide alloys have recently enabled scintillating gamma-ray spectrometers comparable to room-temperature semiconductors (< 3% FWHM energy resolution at 662 keV). However, brittle fracture of these materials hinders the growth of large-volume crystals. Efforts to improve the strength through non-lanthanide alloy substitution, while preserving scintillation, are being pursued. Isovalent alloys of nominal composition Ce0.9Al0.1Br3, Ce0.9Ga0.1Br3, Ce0.9Sc0.1Br3, Ce0.9In0.1Br3, and Ce0.8Y0.2Br3, as well as aliovalent alloys of nominal composition (CeBr3)0.99(CdCl2)0.01, (CeBr3)0.99(CdBr2)0.01, (CeBr3)0.99(ZnBr2)0.01, (CeBr3)0.99(CaBr2)0.01, (CeBr3)0.99(SrBr2)0.01, (CeBr3)0.99(PbBr2)0.01, (CeBr3)0.99(ZrBr4)0.01, and (CeBr3)0.99(HfBr4)0.01, were prepared. All of these alloys exhibit bright fluorescence under UV excitation, with varying shifts in the spectral peaks and intensities relative to pure CeBr3. Further, these alloys scintillate when coupled to a photomultiplier tube (PMT) and exposed to 137Cs gamma rays. These data and the potential for improved crystal growth will be discussed.
Fracture of materials has huge consequences in our daily life, ranging from structural damage to loss of life. Understanding the mechanisms of crack initiation and propagation in materials is therefore very important. Great effort, both theoretical and experimental, has been made to understand the nature of crack propagation in crystalline materials. However, crack propagation in disordered systems such as highly cross-linked polymers (e.g., epoxies) is less well understood. Many composites, such as carbon-fiber composites, have an epoxy matrix, and thus it is important to understand the properties of the epoxy itself. We study fracture in highly cross-linked polymer networks bonded to a solid surface using large-scale molecular dynamics simulations. An initial crack is created by forbidding bonds to form on a fraction of the solid surface up to a crack tip. The time and length scales involved in this process dictate the use of a coarse-grained bead-spring model of the epoxy network. In order to avoid unwanted boundary effects, large systems of up to 300,000 particles are used. Stress-strain curves are determined for each system from tensile-pull molecular dynamics simulations. We find that crack propagation, as well as the formation of voids ahead of the crack, is directly related to the network structure.
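In coarse-grained bead-spring models of this kind, bonded beads typically interact through the finitely extensible nonlinear elastic (FENE) potential (a common choice, e.g. the Kremer-Grest parameterization; the paper's exact parameters are not given in the abstract):
\[ U_{\text{FENE}}(r) = -\tfrac{1}{2}\, k R_0^2 \ln\!\left[ 1 - \left( \frac{r}{R_0} \right)^2 \right], \qquad r < R_0, \]
usually combined with a purely repulsive Lennard-Jones term for excluded volume; the finite extensibility \(R_0\) bounds how far network strands can stretch near the crack tip.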
A growing number of applications involve the transmission of high-intensity laser pulses through optical fibers. Previously, our particular interests led to a series of studies on single-fiber transmission of Q-switched, 1064 nm pulses from multimode Nd:YAG lasers through step-index, multimode, fused silica fibers. The maximum pulse energy that could be transmitted through a given fiber was limited by the onset of laser-induced breakdown or damage. Breakdown at the fiber entrance face was often the first limiting process encountered, but other mechanisms were observed that could result in catastrophic damage at either fiber face, within the initial "entry" segment of the fiber, and at other internal sites along the fiber path. These studies examined system elements that can govern the relative importance of different damage mechanisms, including laser characteristics, the design and alignment of laser-to-fiber injection optics, fiber end-face preparation, and fiber routing. In particular, criteria were established for injection optics in order to maximize margins between transmission requirements and thresholds for laser-induced damage. Recent interests have led us to examine laser injection into multiple fibers. Effective methods for generating multiple beams are available, but the resulting beam geometry can lead to challenges in applying the criteria for optimum injection optics. To illustrate these issues, we have examined a three-fiber injection system consisting of a beam-shaping element, a primary injection lens, and a grating beamsplitter. Damage threshold characteristics were established by testing fibers using the injection geometry imposed by this system design.
Anhydrous cerium bromide (CeBr3) and cerium-doped lanthanum bromide (Ce3+:LaBr3) were obtained by dehydration of hydrates synthesized by a direct acidification process. The dehydration process involves heating in vacuum through three phases: hydrate, amorphous, and crystalline LaBr3. Incomplete removal of the bound water leads to the formation of oxybromides and the partial reduction of the lanthanum at high temperatures. It was found that upon completion of dehydration (< 200°C), a complete solid solution can be formed between LaBr3 and CeBr3. These two compounds form a simple binary phase diagram. Challenges associated with the dehydration process are discussed.
We have synthesized and tested new, highly fluorescent metal-organic framework (MOF) materials based on stilbene dicarboxylic acid as a linker. The crystal structure and porosity of the product depend on the synthetic conditions and choice of solvent, and a low-density cubic form has been identified by X-ray diffraction. In this work we report experiments demonstrating the scintillation properties of these crystals. Bright proton-induced luminescence, with large shifts relative to the fluorescence excitation spectra, was recorded, peaking near 475 nm. Tolerance to fast-proton radiation was evaluated by monitoring this radioluminescence to absorbed doses of several hundred Mrad.