Publications

Results 73101–73125 of 99,299


Quantification of technology innovation using a risk-based framework

World Academy of Science, Engineering and Technology

Sleefe, Gerard E.

There is significant interest in achieving technology innovation through new product development activities. It is recognized, however, that traditional project management practices, focused only on performance, cost, and schedule attributes, can often lead to risk mitigation strategies that limit new technology innovation. In this paper, a new approach is proposed for formally managing and quantifying technology innovation. This approach uses a risk-based framework that simultaneously optimizes innovation attributes along with traditional project management and system engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment was conducted. This experiment simultaneously managed the innovation risks and the product delivery risks through the proposed risk-based framework. Quantitative metrics for technology innovation were tracked, and the experimental results indicate that the risk-based approach can simultaneously achieve both project deliverable and innovation objectives.


Comparison of nBn and nBp mid-wave barrier infrared photodetectors

Proceedings of SPIE - The International Society for Optical Engineering

Klem, John F.; Kim, Jin K.; Cich, M.J.; Hawkins, Samuel D.; Fortune, T.R.; Rienstra, Jeffrey L.

We have fabricated mid-wave infrared photodetectors containing InAsSb absorber regions and AlAsSb barriers in n-barrier-n (nBn) and n-barrier-p (nBp) configurations, and characterized them by current-voltage, photocurrent, and capacitance-voltage measurements in the 100-200 K temperature range. Efficient collection of photocurrent in the nBn structure requires application of a small reverse bias resulting in a minimum dark current, while the nBp devices have high responsivity at zero bias. When biasing both types of devices for equal dark currents, the nBn structure exhibits a differential resistance significantly higher than the nBp, although the nBp device may be biased for arbitrarily low dark current at the expense of much lower dynamic resistance. Capacitance-voltage measurements allow determination of the electron concentration in the unintentionally-doped absorber material, and demonstrate the existence of an electron accumulation layer at the absorber/barrier interface in the nBn device. Numerical simulations of idealized nBn devices demonstrate that photocurrent collection is possible under conditions of minimal absorber region depletion, thereby strongly suppressing depletion region Shockley-Read-Hall generation. © 2010 Copyright SPIE - The International Society for Optical Engineering.
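As a brief illustration of the capacitance-voltage analysis the abstract mentions, the sketch below extracts a carrier concentration from the slope of 1/C² versus bias in the standard one-sided depletion approximation. All device numbers (area, permittivity, doping, built-in voltage) are assumed values for illustration, not data from this paper.

```python
import numpy as np

# Carrier concentration from C-V data via the 1/C^2 slope:
# N = 2 / (q * eps * A^2 * |d(1/C^2)/dV|), one-sided depletion approximation.
q = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
eps_r = 15.0           # assumed relative permittivity of the absorber
A = (100e-6) ** 2      # assumed 100 um x 100 um device area, m^2

# Synthetic C-V sweep for a uniform absorber doping of 1e21 m^-3 (1e15 cm^-3)
N_true = 1e21
Vbi = 0.2                                # assumed built-in voltage, V
V = np.linspace(-1.0, 0.0, 50)           # reverse-bias sweep
C = A * np.sqrt(q * eps_r * eps0 * N_true / (2 * (Vbi - V)))

slope = np.gradient(1.0 / C**2, V)       # d(1/C^2)/dV, linear in V here
N_extracted = 2.0 / (q * eps_r * eps0 * A**2 * np.abs(slope))
print(N_extracted.mean())                # recovers ~1e21 m^-3
```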


Engineering optical forces in waveguides and cavities based on optical response

Proceedings of SPIE - The International Society for Optical Engineering

Rakich, Peter T.; Wang, Zheng; Popović, Miloš A.

We present a new treatment of optical forces, revealing that the forces in virtually all optomechanically variable systems can be computed exactly and simply from only the optical phase and amplitude response of the system. This treatment, termed the response theory of optical forces (or RTOF), provides conceptual clarity to the essential physics of optomechanical systems, which computationally intensive Maxwell stress-tensor analyses leave obscured, enabling the construction of simple models with which optical forces and trapping potentials can be synthesized based on the optical response of optomechanical systems. A theory of optical forces based on the optical response of systems is advantageous because the phase and amplitude response of virtually any optomechanical system (involving waveguides, ring resonators, or photonic crystals) can be derived, with relative ease, through well-established analytical theories. In contrast, conventional Maxwell stress-tensor methods require the computation of complex 3-dimensional electromagnetic field distributions, making a theory for the synthesis of optical forces exceedingly difficult. Through numerous examples, we illustrate that the optical forces generated in complex waveguide and microcavity systems can be computed exactly through use of analytical scattering-matrix methods. When compared with Maxwell stress-tensor methods of force computation, perfect agreement is found. © 2010 Copyright SPIE - The International Society for Optical Engineering.
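For concreteness, here is a minimal numeric sketch of the central RTOF result for a lossless single-port system, F = (P/ω)·dφ/dq, where φ(q) is the phase response along a mechanical coordinate q. The effective-index model and all parameter values below are hypothetical stand-ins for a real waveguide dispersion calculation, not results from the paper.

```python
import numpy as np

# Force from the phase response alone: F = (P / omega) * d(phi)/dq,
# evaluated by finite difference on a toy n_eff(q) gap dependence.
c = 2.998e8                          # speed of light, m/s
wavelength = 1.55e-6
omega = 2 * np.pi * c / wavelength
P = 1e-3                             # 1 mW guided power
L = 10e-6                            # 10 um interaction length

def n_eff(q):
    # assumed effective index vs. waveguide gap q (toy exponential model)
    return 2.2 + 0.3 * np.exp(-q / 100e-9)

def phase(q):
    return omega * n_eff(q) * L / c  # accumulated phase over length L

q0, dq = 100e-9, 1e-12
F = (P / omega) * (phase(q0 + dq) - phase(q0 - dq)) / (2 * dq)
print(F)   # negative (attractive) force, piconewton scale
```

Note that ω cancels in this lossless example, leaving F = (P·L/c)·dn_eff/dq, which is the kind of closed-form result the response-based picture makes easy.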


Evaluation of arsenazo III as a contrast agent for photoacoustic detection of micromolar calcium transients

Progress in Biomedical Optics and Imaging - Proceedings of SPIE

Cooley, Erika J.; Kruizinga, Pieter; Branch, Darren W.; Emelianov, Stanislav

Elucidating the role of calcium fluctuations at the cellular level is essential to gain insight into more complex signaling and metabolic activity within tissues. Recent developments in optical monitoring of calcium transients suggest that cells integrate and transmit information through large networks. Thus, monitoring calcium transients in these populations is important for identifying normal and pathological states of a variety of systems. Though optical techniques can be used to image calcium fluxes using fluorescent probes, depth penetration limits the information that can be acquired from tissues in vivo. Alternatively, the calcium-sensitive dye arsenazo III is useful for optical techniques that rely on absorption of light rather than fluorescence for image contrast. We report on the use of arsenazo III for detection of calcium using photoacoustics, a deeply penetrating imaging technique in which an ultrasound signal is generated following localized absorption of light. The absorbance properties of the dye in the presence of calcium were measured directly using UV-Vis spectrophotometry. For photoacoustic studies, a phantom was constructed to monitor the change in absorbance of 25 μM arsenazo III at 680 nm in the presence of calcium. Subsequent results demonstrated a linear increase in photoacoustic signal as calcium in the range of 1–20 μM complexed with the dye, followed by saturation of the signal as increasing amounts of calcium were added. For delivery of the dye to tissue preparations, a liposomal carrier was fabricated and characterized. This work demonstrates the feasibility of using arsenazo III for photoacoustic monitoring of calcium transients in vivo. © 2010 Copyright SPIE - The International Society for Optical Engineering.
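The linear-then-saturating signal described above is the expected behavior of a 1:1 dye-analyte binding equilibrium. The sketch below reproduces that shape with the exact solution of the binding quadratic; the dissociation constant is an assumed value for illustration, not a measured property of arsenazo III.

```python
import numpy as np

# Fraction of 25 uM dye complexed with calcium under 1:1 binding,
# solving [CaD]^2 - (D + Ca + Kd)[CaD] + D*Ca = 0 exactly.
D_total = 25e-6      # 25 uM dye, as in the phantom study
Kd = 5e-6            # assumed dissociation constant

def complex_conc(Ca_total):
    b = D_total + Ca_total + Kd
    return (b - np.sqrt(b**2 - 4 * D_total * Ca_total)) / 2

Ca = np.linspace(0, 100e-6, 11)
signal = complex_conc(Ca) / D_total   # near-linear at low Ca, saturating toward 1
```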


High speed optical filtering using active resonant subwavelength gratings

Proceedings of SPIE - The International Society for Optical Engineering

Gin, A.V.; Kemme, Shanalyn A.; Boye, Robert; Peters, David; Ihlefeld, Jon F.; Briggs, Ronald D.; Wendt, Joel R.; Ellis, A.R.; Marshall, L.H.; Carter, T.R.; Hunker, J.D.; Samora, S.

In this work, we describe the most recent progress towards the device modeling, fabrication, testing, and system integration of active resonant subwavelength grating (RSG) devices. Passive RSG devices, a class of subwavelength-structured surfaces (SWS), have been of interest in recent years due to their narrow spectral response and high-quality filtering performance. Modulating the bias voltage of interdigitated metal electrodes over an electro-optic thin-film material enables the RSG components to act as actively tunable high-speed optical filters. The filter characteristics of the device can be engineered using the geometry of the device grating and underlying materials. Using electron beam lithography and specialized etch techniques, we have fabricated interdigitated metal electrodes on an insulating layer and a BaTiO3 thin film on a sapphire substrate. With bias voltages of up to 100 V, spectral red shifts of several nanometers are measured, as well as significant changes in the reflected and transmitted signal intensities around the 1.55 µm wavelength. Due to their small size and lack of moving parts, these devices are attractive for high-speed spectral sensing applications. We will discuss the most recent device testing results as well as comment on the system integration aspects of this project. © 2010 Copyright SPIE - The International Society for Optical Engineering.


Distributed Sensor Fusion in Water Quality Event Detection

Journal of Water Resources Planning and Management

Koch, Mark W.; Mckenna, Sean A.

To protect drinking water systems, a contamination warning system can use in-line sensors to indicate possible accidental and deliberate contamination. Currently, reporting of an incident occurs when data from a single station detects an anomaly. This paper proposes an approach for combining data from multiple stations to reduce false background alarms. By considering the location and time of individual detections as points resulting from a random space-time point process, Kulldorff's scan test can find statistically significant clusters of detections. Using EPANET to simulate contaminant plumes of varying sizes moving through a water network with varying numbers of sensing nodes, it is shown that the scan test can detect significant clusters of events. These significant clusters can reduce the false alarms resulting from background noise, and the clusters can help indicate the time and source location of the contaminant. Fusion of monitoring station results within a moderately sized network shows that false alarm errors are reduced by three orders of magnitude using the scan test. © 2011 ASCE.
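As a simplified toy of the clustering idea (not the paper's implementation), the sketch below scores space-time cylinders of synthetic detections with Kulldorff's Poisson likelihood ratio and recovers an injected cluster against a uniform background. All coordinates, rates, and window sizes are made up for illustration.

```python
import numpy as np

# Detections are points (x, y, t). Candidate cylinders (disk x time window)
# are scored with LLR = c*ln(c/mu) - (c - mu) for observed count c against
# the count mu expected under spatial/temporal uniformity; the best-scoring
# cylinder is the most significant cluster candidate.
rng = np.random.default_rng(0)

bg = np.column_stack([rng.random(200), rng.random(200),
                      rng.integers(0, 100, 200)])        # uniform background
cl = np.column_stack([0.5 + 0.05 * rng.standard_normal(30),
                      0.5 + 0.05 * rng.standard_normal(30),
                      rng.integers(48, 53, 30)])          # injected cluster
pts = np.vstack([bg, cl])
n, total_time = len(pts), 100.0

def llr(c, mu):
    return c * np.log(c / mu) - (c - mu) if c > mu else 0.0

best = (0.0, None)
for cx, cy in pts[:, :2]:                    # candidate centers at data points
    for r in (0.05, 0.1):                    # candidate spatial radii
        for t0 in range(0, 100, 5):          # candidate 10-step time windows
            inside = ((pts[:, 0] - cx)**2 + (pts[:, 1] - cy)**2 <= r**2) \
                     & (pts[:, 2] >= t0) & (pts[:, 2] < t0 + 10)
            c = inside.sum()
            mu = n * (np.pi * r**2) * (10 / total_time)
            score = llr(c, mu)
            if score > best[0]:
                best = (score, (cx, cy, r, t0))
print(best)   # best cluster lands near (0.5, 0.5) around t = 50
```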


Observations on vapor pressure in SPR caverns: sources

Munson, Darrell E.

The oil of the Strategic Petroleum Reserve (SPR) represents a national response to any potential emergency or intentional restriction of crude oil supply to this country, and conforms to international agreements to maintain such a reserve. As assurance that this reserve oil will be available in a timely manner should a restriction in supply occur, the oil of the reserve must meet certain transportation criteria. The transportation criteria require that the oil not evolve dangerous gas, either explosive or toxic, while in the process of transport to, or storage at, the destination facility. This requirement can be a challenge because the stored oil can acquire dissolved gases while in the SPR. A series of reports has analyzed in exceptional detail the reasons for the increases, or regains, in gas content; however, there remains some uncertainty in these explanations and an inability to predict why the regains occur. Where the regains are prohibitive and exceed the criteria, the oil must undergo degasification, in which excess portions of the volatile gas are removed. There are only two known sources of gas regain: one is the salt dome formation itself, which may contain gas inclusions from which gas can be released during oil processing or storage; the second is the release of gases by the volatile components of the crude oil itself during storage, especially if the stored oil undergoes heating or is subject to biological generation processes. In this work, the earlier analyses are reexamined and significant alterations in conclusions are proposed. The alterations are based on how the fluid exchanges of brine and oil take up gas released from domal salt during solutioning, and thereafter during further exchanges of fluids. The transparency of the brine/oil interface and the transfer of gas across this interface remain an important unanswered question.
The contribution from creep-induced damage releasing gas from the salt surrounding the cavern is considered through computations using the Multimechanism Deformation Coupled Fracture (MDCF) model, which suggest a relatively minor, but potentially significant, contribution to the regain process. Apparently, gains in gas content can be generated from the oil itself during storage because the salt dome has been heated by the geothermal gradient of the earth. The heated domal salt transfers heat to the oil stored in the caverns, thereby increasing the gas released by the volatile components and raising the boiling point pressure of the oil. The process is essentially a variation on the fractionation of oil, in which each of the discrete components of the oil has a discrete temperature range over which that component can be volatilized and removed from the remaining components. The most volatile components are methane and ethane, the shortest-chain hydrocarbons. Since this fractionation is a fundamental aspect of oil behavior, the volatile component can be removed by degassing, potentially preventing the evolution of gas at or below the temperature of the degas process. While this process is well understood, the ability to describe the results of degassing and subsequent regain is not. Trends are not well defined for original gas content, regain, and prescribed effects of degassing. As a result, prediction of cavern response is difficult. As a consequence of the current analysis, it is suggested that the solutioning brine of the final fluid exchange of a just-completed cavern, immediately prior to the first oil filling, should be analyzed for gas content using existing analysis techniques. This would add important information and clarification to the regain process. It is also proposed that the quantity of volatile components, such as methane, be determined before and after any degasification operation.


Relation of validation experiments to applications

Numerical Heat Transfer, Part B: Fundamentals

Hamilton, J.R.; Hills, Richard G.

Model validation efforts often use a suite of experiments to provide data to test models for predictive use for a targeted application. A question that naturally arises is: "Does the experimental suite provide data to adequately test the target application model?" The goal of this article is to develop methodology to partially address this question. The methodology utilizes computational models for the individual test suite experiments and for the target application to assess coverage. The impact of uncertainties in model parameters on the assessment is addressed. Simple linear and nonlinear heat conduction examples of the methodology are provided. Copyright © Taylor & Francis Group, LLC.


DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual

Adams, Brian M.; Dalbey, Keith; Eldred, Michael; Gay, David M.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen; Hough, Patricia D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
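To make the "uncertainty quantification with sampling" capability concrete, the sketch below shows the kind of sampling-based UQ loop that DAKOTA automates: Latin hypercube samples of uncertain inputs pushed through a simulation, with output statistics collected. This is an illustration of the workflow, not DAKOTA's API; the model function is a hypothetical stand-in for the external code DAKOTA would drive.

```python
import numpy as np

# Latin hypercube sampling: one stratified draw per interval [i/n, (i+1)/n)
# in each dimension, with independently shuffled column orderings.
rng = np.random.default_rng(42)

def latin_hypercube(n, d):
    strata = np.tile(np.arange(n), (d, 1))               # shape (d, n)
    perms = rng.permuted(strata, axis=1).T               # shuffle each dim
    return (perms + rng.random((n, d))) / n              # jitter within strata

def model(x):
    # stand-in for an external simulation code
    return x[:, 0] ** 2 + np.sin(3 * x[:, 1])

samples = latin_hypercube(200, 2)
y = model(samples)
print(y.mean(), y.std())   # moments of the output distribution
```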


DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual

Adams, Brian M.; Dalbey, Keith; Eldred, Michael; Gay, David M.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen; Hough, Patricia D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.


DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual

Adams, Brian M.; Dalbey, Keith; Eldred, Michael; Gay, David M.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen; Hough, Patricia D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.


Teuchos C++ memory management classes, idioms, and related topics, the complete reference: a comprehensive strategy for safe and efficient memory management in C++ for high performance computing

Bartlett, Roscoe


Development of low-cost, compact, reliable, high energy density ceramic nanocomposite capacitors

Monson, Todd; Diantonio, Christopher; Winter, Michael R.; Huber, Dale L.; Roesler, Alexander; Chavez, Thomas P.; Stevens, Tyler E.; Vreeland, Erika

The ceramic nanocomposite capacitor goals are: (1) more than double the energy density of ceramic capacitors, cutting size and weight by more than half; (2) a potential cost reduction (factor of >4) due to decreased sintering temperature, allowing the use of lower-cost electrode materials such as 70/30 Ag/Pd; and (3) a lower sintering temperature that will allow co-firing with other electrical components.


Effect of shell drilling stiffness on response calculations of rectangular plates and tubes of rectangular cross-section under compression

Corona, Edmundo; Gearhart, Jhana S.; Hales, Jason D.

This report considers the calculation of the quasi-static nonlinear response of rectangular flat plates and tubes of rectangular cross-section subjected to compressive loads using quadrilateral shell finite element models. The principal objective is to assess the effect that the shell drilling stiffness parameter has on the calculated results. The calculated collapse load of elastic-plastic tubes of rectangular cross-section is of particular interest here. The drilling stiffness factor specifies the amount of artificial stiffness that is given to the shell element drilling degree of freedom (rotation normal to the plane of the element). The element formulation has no stiffness for this degree of freedom, and this can lead to numerical difficulties. The results indicate that in the problems considered it is necessary to add a small amount of drilling stiffness to obtain converged results when using either implicit quasi-static or explicit dynamic methods. The report concludes with a parametric study of the imperfection sensitivity of the calculated responses of the elastic-plastic tubes with rectangular cross-section.


A global 3D P-velocity model of the Earth's crust and mantle for improved event location

Ballard, Sanford; Young, Christopher J.; Hipp, James R.; Chang, Marcus C.; Encarnacao, Andre V.; Lewis, Jennifer E.

To test the hypothesis that high-quality 3D Earth models will produce seismic event locations that are more accurate and more precise, we are developing a global 3D P-wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. The reduction in the total number of ray paths is greater than 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one, we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors.
The resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model with those of standard 1D models. We perform location tests on a global, geographically distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation relative to the standard 1D ak135 model, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that ak135 represents a global average and cannot therefore capture local and regional variations.


Shale disposal of U.S. high-level radioactive waste

Hansen, Francis D.; Gaither, Katherine N.; Sobolik, Steven; Cygan, Randall T.; Hardin, Ernest; Rechard, Robert P.; Freeze, Geoffrey; Sassani, David C.; Brady, Patrick V.; Stone, Charles M.; Martinez, Mario J.; Dewers, Thomas

This report evaluates the feasibility of high-level radioactive waste disposal in shale within the United States. The U.S. has many possible clay/shale/argillite basins with positive attributes for permanent disposal. Similar geologic formations have been extensively studied by international programs with largely positive results, over significant ranges of the most important material characteristics including permeability, rheology, and sorptive potential. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in shale media. We develop scoping performance analyses, based on the applicable features, events, and processes identified by international investigators, to support a generic conclusion regarding post-closure safety. Requisite assumptions for these analyses include waste characteristics, disposal concepts, and important properties of the geologic formation. We then apply lessons learned from Sandia experience on the Waste Isolation Pilot Project and the Yucca Mountain Project to develop a disposal strategy should a shale repository be considered as an alternative disposal pathway in the U.S. Disposal of high-level radioactive waste in suitable shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. Thermal-hydrologic-mechanical calculations indicate that temperatures near emplaced waste packages can be maintained below boiling and will decay to within a few degrees of the ambient temperature within a few decades (or longer depending on the waste form). 
Construction effects, ventilation, and the thermal pulse will lead to clay dehydration and deformation, confined to an excavation disturbed zone within a few meters of the repository that can be reasonably characterized. Within a few centuries after waste emplacement, overburden pressures will seal fractures, resaturate the dehydrated zones, and provide a repository setting that strongly limits radionuclide movement to diffusive transport. Coupled hydrogeochemical transport calculations indicate maximum extents of radionuclide transport on the order of tens to hundreds of meters, or less, in a million years. Under the conditions modeled, a shale repository could achieve total containment, with no releases to the environment in undisturbed scenarios. The performance analyses described here are based on the assumption that long-term standards for disposal in clay/shale would be identical in the key aspects to those prescribed for existing repository programs such as Yucca Mountain. This generic repository evaluation for shale is the first developed in the United States. Previous repository considerations have emphasized salt formations and volcanic rock formations. Much of the experience gained from U.S. repository development, such as seal system design, coupled process simulation, and application of performance assessment methodology, is applied here to scoping analyses for a shale repository. A contemporary understanding of clay mineralogy and attendant chemical environments has allowed identification of the appropriate features, events, and processes to be incorporated into the analysis. Advanced multi-physics modeling provides key support for understanding the effects from coupled processes. The results of the assessment show that shale formations provide a technically advanced, scientifically sound disposal option for the U.S.
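The "tens to hundreds of meters in a million years" transport scale quoted above can be sanity-checked with the characteristic diffusion length L ~ sqrt(2Dt). The effective diffusion coefficients below are assumed values typical of compacted clays, not site-specific data from this report.

```python
import math

# Characteristic diffusion length over one million years for a range of
# assumed effective diffusion coefficients in clay/shale.
seconds_per_year = 3.156e7
t = 1e6 * seconds_per_year           # one million years, in seconds

for D in (1e-12, 1e-11, 1e-10):      # m^2/s, assumed range for compacted clay
    L = math.sqrt(2 * D * t)
    print(f"D = {D:.0e} m^2/s  ->  L ~ {L:.0f} m")
```

Even the upper end of this assumed range stays within roughly a hundred meters, consistent with the diffusion-limited containment argument in the abstract.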
