There is significant interest in achieving technology innovation through new product development activities. It is recognized, however, that traditional project management practices, focused only on performance, cost, and schedule attributes, can often lead to risk mitigation strategies that limit new technology innovation. In this paper, a new approach is proposed for formally managing and quantifying technology innovation. This approach uses a risk-based framework that simultaneously optimizes innovation attributes along with traditional project management and system engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment was conducted. This experiment simultaneously managed the innovation risks and the product delivery risks through the proposed risk-based framework. Quantitative metrics for technology innovation were tracked, and the experimental results indicate that the risk-based approach can simultaneously achieve both project deliverable and innovation objectives.
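As a hedged illustration of what simultaneously managing the two risk sets might mean quantitatively, the C++ sketch below scores each risk as likelihood times consequence and combines the delivery and innovation risk exposures into a single weighted objective. The structure, names, and weights are hypothetical and are not taken from the experiment described above.

    // Illustrative sketch only: quantify and combine product-delivery risks and
    // innovation risks in one objective. All names and weights are hypothetical.
    #include <iostream>
    #include <vector>

    struct Risk {
      double likelihood;   // probability of occurrence, 0..1
      double consequence;  // normalized impact, 0..1
      double exposure() const { return likelihood * consequence; }
    };

    // Weighted combination of total delivery-risk and innovation-risk exposure.
    double combinedRisk(const std::vector<Risk>& delivery,
                        const std::vector<Risk>& innovation,
                        double wDelivery, double wInnovation) {
      double d = 0.0, i = 0.0;
      for (const Risk& r : delivery)   d += r.exposure();
      for (const Risk& r : innovation) i += r.exposure();
      return wDelivery * d + wInnovation * i;
    }

    int main() {
      std::vector<Risk> delivery   = {{0.3, 0.8}, {0.2, 0.5}};  // cost/schedule/performance risks
      std::vector<Risk> innovation = {{0.6, 0.4}, {0.4, 0.7}};  // risks of failing to innovate
      std::cout << "combined risk exposure: "
                << combinedRisk(delivery, innovation, 0.5, 0.5) << "\n";
      return 0;
    }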
The oil of the Strategic Petroleum Reserve (SPR) represents a national response to any potential emergency or intentional restriction of crude oil supply to this country, and conforms to international agreements to maintain such a reserve. To ensure that this reserve oil will be available in a timely manner should a restriction in supply occur, the oil of the reserve must meet certain transportation criteria. The transportation criteria require that the oil not evolve dangerous gas, either explosive or toxic, while in the process of transport to, or storage at, the destination facility. This requirement can be a challenge because the stored oil can acquire dissolved gases while in the SPR. A series of reports has analyzed in exceptional detail the reasons for the increases, or regains, in gas content; however, there remains some uncertainty in these explanations and an inability to predict why the regains occur. Where the regains are prohibitive and exceed the criteria, the oil must undergo degasification, in which excess portions of the volatile gas are removed. There are only two known sources of gas regain: one is the salt dome formation itself, which may contain gas inclusions from which gas can be released during oil processing or storage, and the second is increases in the gas released by the volatile components of the crude oil itself during storage, especially if the stored oil undergoes heating or is subject to biological generation processes. In this work, the earlier analyses are reexamined and significant alterations in conclusions are proposed. The alterations are based on how the exchanged fluids, brine and oil, take up gas released from the domal salt during solutioning and, thereafter, during further exchanges of fluids. Transparency of the brine/oil interface and the transfer of gas across this interface remain an important unanswered question. The contribution from creep-induced damage releasing gas from the salt surrounding the cavern is considered through computations using the Multimechanism Deformation Coupled Fracture (MDCF) model, suggesting a relatively minor, but potentially significant, contribution to the regain process. Gains in gas content can apparently also be generated from the oil itself during storage because the salt dome has been heated by the geothermal gradient of the earth. The heated domal salt transfers heat to the oil stored in the caverns, thereby increasing the gas released by the volatile components and raising the boiling point pressure of the oil. The process is essentially a variation on the fractionation of oil, where each of the discrete components of the oil has a discrete temperature range over which that component can be volatilized and removed from the remaining components. The most volatile components are methane and ethane, the shortest-chain hydrocarbons. Since this fractionation is a fundamental aspect of oil behavior, the volatile components can be removed by degassing, potentially preventing the evolution of gas at or below the temperature of the degas process. While this process is well understood, the ability to describe the results of degassing and subsequent regain is not. Trends are not well defined for original gas content, regain, and the prescribed effects of degassing. As a result, prediction of cavern response is difficult.
As a consequence of this analysis, it is suggested that the solutioning brine from the final fluid exchange of a just-completed cavern, immediately prior to the first oil fill, be analyzed for gas content using existing analysis techniques. This would add important information and clarification to the regain process. It is also proposed that the quantity of volatile components, such as methane, be determined before and after any degasification operation.
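The temperature effect described above can be illustrated, very roughly, with an ideal-solution (Raoult's law) estimate of the pressure at which a mixture begins to evolve gas: P_sat(T) = sum_i x_i * P_i_sat(T). Because every pure-component vapor pressure increases with temperature, heating the stored oil raises this pressure. The C++ sketch below encodes only that qualitative point; crude oil is strongly non-ideal, the component list and vapor-pressure curves are hypothetical placeholders, and none of this is taken from the SPR analyses themselves.

    // Hedged sketch: ideal-solution (Raoult's law) estimate of the saturation
    // pressure of a mixture. Qualitative illustration only; real crude oil is
    // strongly non-ideal and the vapor-pressure curves below are invented.
    #include <cmath>
    #include <functional>
    #include <vector>

    struct Component {
      double moleFraction;                          // x_i, fractions sum to ~1
      std::function<double(double)> vaporPressure;  // P_i_sat(T) in Pa, increasing in T
    };

    // P_sat(T) = sum_i x_i * P_i_sat(T); monotone increasing in temperature
    // whenever each component's vapor pressure is.
    double mixtureSaturationPressure(const std::vector<Component>& mix, double temperatureK) {
      double p = 0.0;
      for (const Component& c : mix) p += c.moleFraction * c.vaporPressure(temperatureK);
      return p;
    }

    int main() {
      // Toy Clausius-Clapeyron-like curves with illustrative constants.
      auto toyVp = [](double A, double B) {
        return [A, B](double T) { return A * std::exp(-B / T); };
      };
      std::vector<Component> mix = {
        {0.02, toyVp(4.0e9, 1000.0)},  // volatile "light" fraction (methane-like)
        {0.98, toyVp(1.0e9, 3000.0)}   // "heavy" pseudo-component
      };
      // A warmer cavern implies a higher gas-evolution pressure.
      return mixtureSaturationPressure(mix, 320.0) > mixtureSaturationPressure(mix, 300.0) ? 0 : 1;
    }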
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
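A hedged sketch of the simulation side of that interface is shown below: a stand-alone analysis driver of the kind DAKOTA's fork or system interfaces can launch, which reads the parameters file DAKOTA writes, evaluates a toy response, and writes a results file. The file parsing here is deliberately simplified and the response function is invented; the user's manual is the authority on the exact file formats, keywords, and interface options.

    // Hedged sketch of a DAKOTA-style analysis driver (argv[1] = parameters file
    // written by DAKOTA, argv[2] = results file to be read back by DAKOTA).
    // The parsing below is simplified and the response is a toy quadratic.
    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    int main(int argc, char** argv) {
      if (argc < 3) return 1;
      std::ifstream params(argv[1]);
      std::ofstream results(argv[2]);

      // Simplified view of the parameters file: a leading variable count,
      // then one "value descriptor" line per variable.
      std::size_t numVars = 0;
      params >> numVars;
      std::string line;
      std::getline(params, line);  // consume the rest of the header line

      std::vector<double> x;
      for (std::size_t i = 0; i < numVars && std::getline(params, line); ++i)
        x.push_back(std::stod(line));  // leading numeric value on each line

      // Toy objective the iterative method can drive toward x = (1, ..., 1).
      double f = 0.0;
      for (double xi : x) f += (xi - 1.0) * (xi - 1.0);

      results << f << " obj_fn\n";
      return 0;
    }

In an actual study, the DAKOTA input would point its interface block at this executable, and the chosen method (optimization, sampling, parameter study, and so on) would drive repeated evaluations of it.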
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where they appear in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which have no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates, or the other advanced features of C++) all that well anyway, what really are most programmers getting out of C++ that would outweigh its extra complexity over C?
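A minimal sketch of the basic idiom, assuming the Trilinos Teuchos package is on the include path: the allocation is wrapped in a Teuchos::RCP at the point of the call to new, clients share ownership through the RCP, and no raw pointer or explicit delete appears in the high-level code. The Mesh and Solver classes here are invented placeholders.

    // Minimal sketch of reference-counted ownership with Teuchos::RCP in place
    // of raw new/delete. Mesh and Solver are hypothetical example classes.
    #include "Teuchos_RCP.hpp"

    struct Mesh {
      int numCells;
      explicit Mesh(int n) : numCells(n) {}
    };

    // A client that shares ownership of the Mesh: no raw pointers, no delete.
    class Solver {
    public:
      explicit Solver(const Teuchos::RCP<Mesh>& mesh) : mesh_(mesh) {}
      int cells() const { return mesh_->numCells; }
    private:
      Teuchos::RCP<Mesh> mesh_;  // reference count keeps the Mesh alive as needed
    };

    int main() {
      // rcp(new T(...)) wraps the allocation immediately; the object is deleted
      // automatically when the last RCP referencing it goes away.
      Teuchos::RCP<Mesh> mesh = Teuchos::rcp(new Mesh(100));
      Solver solver(mesh);
      return solver.cells() == 100 ? 0 : 1;
    }

In a debug build, the same code also gets the runtime checks described above (for example, detection of dangling references), while the optimized build reduces the RCP machinery to near raw-pointer cost.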
C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is becoming less popular, while the popularity of C has remained fairly stable over the last decade [22]. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g., Klocwork [23]) could implement a preprocessor-like language similar to OpenMP [24] that would allow the programmer to declare (in comments) that certain blocks of code should be "pointer-free" or allow smaller blocks to be "pointers allowed". This would significantly improve the robustness of code that uses the memory management classes described here.
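No compiler or static-analysis tool currently recognizes such annotations; the fragment below is only a hypothetical sketch of what the proposed comment-based markers might look like, with invented marker names.

    // Hypothetical illustration of comment-based "pointer-free" /
    // "pointers allowed" block markers; the markers are invented and are not
    // recognized by any existing tool.
    #include <cstddef>
    #include <vector>

    // POINTER_FREE_BEGIN  <- a checker would reject raw pointers, new, and delete here
    double sum(const std::vector<double>& values) {
      double s = 0.0;
      for (double v : values) s += v;
      return s;
    }
    // POINTER_FREE_END

    // POINTERS_ALLOWED_BEGIN  <- low-level code that legitimately needs raw pointers
    double sumRaw(const double* data, std::size_t n) {
      double s = 0.0;
      for (std::size_t i = 0; i < n; ++i) s += data[i];
      return s;
    }
    // POINTERS_ALLOWED_END

    int main() {
      std::vector<double> v = {1.0, 2.0, 3.0};
      return (sum(v) == sumRaw(v.data(), v.size())) ? 0 : 1;
    }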
The ceramic nanocomposite capacitor goals are: (1) more than double the energy density of ceramic capacitors (cutting size and weight by more than half); (2) potential cost reduction (factor of >4) due to decreased sintering temperature (allowing the use of lower-cost electrode materials such as 70/30 Ag/Pd); and (3) the lower sintering temperature will allow co-firing with other electrical components.
This report considers the calculation of the quasi-static nonlinear response of rectangular flat plates and tubes of rectangular cross-section subjected to compressive loads using quadrilateral shell finite element models. The principal objective is to assess the effect that the shell drilling stiffness parameter has on the calculated results. The calculated collapse load of elastic-plastic tubes of rectangular cross-section is of particular interest here. The drilling stiffness factor specifies the amount of artificial stiffness that is given to the shell element drilling degree of freedom (rotation normal to the plane of the element). The element formulation has no stiffness for this degree of freedom, and this can lead to numerical difficulties. The results indicate that in the problems considered it is necessary to add a small amount of drilling stiffness to obtain converged results when using either implicit quasi-static or explicit dynamics methods. The report concludes with a parametric study of the imperfection sensitivity of the calculated responses of the elastic-plastic tubes with rectangular cross-section.
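A hedged sketch of the general technique, not of the particular element formulation studied in the report: for a four-node shell element with six degrees of freedom per node, a small artificial stiffness, equal to the drilling stiffness factor times a representative rotational stiffness, is added to the otherwise unstiffened drilling (normal-rotation) diagonal entries so that the assembled system is not singular. The degree-of-freedom ordering and the reference-stiffness choice below are assumptions made for illustration.

    // Hedged sketch: add artificial drilling stiffness to a 4-node shell element
    // stiffness matrix. Assumes 6 DOF per node ordered ux,uy,uz,rx,ry,rz, with
    // rz taken as the drilling rotation; this is a generic illustration only.
    #include <algorithm>
    #include <array>

    constexpr int kDof = 24;  // 4 nodes * 6 DOF
    using ElementMatrix = std::array<std::array<double, kDof>, kDof>;

    void addDrillingStiffness(ElementMatrix& k, double drillFactor) {
      // Use the largest rotational diagonal term as a reference stiffness.
      double kRef = 0.0;
      for (int node = 0; node < 4; ++node)
        for (int rot = 3; rot < 6; ++rot)
          kRef = std::max(kRef, k[6 * node + rot][6 * node + rot]);

      // Add a small fraction of the reference stiffness to each drilling DOF
      // so the element contributes nonzero stiffness for that rotation.
      for (int node = 0; node < 4; ++node) {
        const int drill = 6 * node + 5;
        k[drill][drill] += drillFactor * kRef;
      }
    }

    int main() {
      ElementMatrix k{};               // zero-initialized placeholder matrix
      k[3][3] = 1.0e6;                 // a representative bending stiffness term
      addDrillingStiffness(k, 1.0e-4); // small, user-chosen drilling stiffness factor
      return k[5][5] > 0.0 ? 0 : 1;    // drilling diagonal is now nonzero
    }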
To test the hypothesis that high-quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. The reduction in the total number of ray paths is >55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one, we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model against standard 1D models. We perform location tests on a global, geographically distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation relative to standard 1D ak135, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that ak135 represents a global average and cannot therefore capture local and regional variations.
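The diagonal estimation mentioned above (Bekas et al.) needs only matrix-vector products with the resolution matrix: with random +/-1 probe vectors v_k, diag(A) is approximated elementwise by (sum_k v_k .* A v_k) ./ (sum_k v_k .* v_k), where .* and ./ denote elementwise product and division. The sketch below is a generic implementation of that estimator with a caller-supplied matrix-vector product; it is not the SALSA3D code itself.

    // Sketch of the Bekas et al. stochastic diagonal estimator:
    //   diag(A) ~ (sum_k v_k .* (A v_k)) ./ (sum_k v_k .* v_k)
    // using random +/-1 probe vectors. Only products A*v are required, which is
    // what makes the approach practical for a large, implicitly defined
    // resolution matrix.
    #include <cstddef>
    #include <functional>
    #include <random>
    #include <vector>

    using Vec = std::vector<double>;
    using MatVec = std::function<Vec(const Vec&)>;  // y = A * v, supplied by caller

    Vec estimateDiagonal(const MatVec& applyA, std::size_t n, int numProbes,
                         unsigned seed = 12345) {
      std::mt19937 gen(seed);
      std::bernoulli_distribution coin(0.5);
      Vec numer(n, 0.0), denom(n, 0.0);

      for (int k = 0; k < numProbes; ++k) {
        Vec v(n);
        for (std::size_t i = 0; i < n; ++i) v[i] = coin(gen) ? 1.0 : -1.0;
        const Vec Av = applyA(v);
        for (std::size_t i = 0; i < n; ++i) {
          numer[i] += v[i] * Av[i];
          denom[i] += v[i] * v[i];  // equals numProbes for +/-1 probes
        }
      }
      for (std::size_t i = 0; i < n; ++i) numer[i] /= denom[i];
      return numer;  // approximate diag(A)
    }

    int main() {
      // Toy check with an implicit diagonal matrix A = diag(1, 2, 3).
      MatVec applyA = [](const Vec& v) {
        Vec y(v.size());
        for (std::size_t i = 0; i < v.size(); ++i) y[i] = double(i + 1) * v[i];
        return y;
      };
      const Vec d = estimateDiagonal(applyA, 3, 50);
      return (d[0] > 0.9 && d[2] > 2.9) ? 0 : 1;  // recovers ~(1, 2, 3)
    }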
This report evaluates the feasibility of high-level radioactive waste disposal in shale within the United States. The U.S. has many possible clay/shale/argillite basins with positive attributes for permanent disposal. Similar geologic formations have been extensively studied by international programs with largely positive results, over significant ranges of the most important material characteristics including permeability, rheology, and sorptive potential. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in shale media. We develop scoping performance analyses, based on the applicable features, events, and processes identified by international investigators, to support a generic conclusion regarding post-closure safety. Requisite assumptions for these analyses include waste characteristics, disposal concepts, and important properties of the geologic formation. We then apply lessons learned from Sandia experience on the Waste Isolation Pilot Plant and the Yucca Mountain Project to develop a disposal strategy should a shale repository be considered as an alternative disposal pathway in the U.S. Disposal of high-level radioactive waste in suitable shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. Thermal-hydrologic-mechanical calculations indicate that temperatures near emplaced waste packages can be maintained below boiling and will decay to within a few degrees of the ambient temperature within a few decades (or longer depending on the waste form). Construction effects, ventilation, and the thermal pulse will lead to clay dehydration and deformation, confined to an excavation disturbed zone within a few meters of the repository, that can be reasonably characterized. Within a few centuries after waste emplacement, overburden pressures will seal fractures, resaturate the dehydrated zones, and provide a repository setting that strongly limits radionuclide movement to diffusive transport. Coupled hydrogeochemical transport calculations indicate maximum extents of radionuclide transport on the order of tens to hundreds of meters, or less, in a million years. Under the conditions modeled, a shale repository could achieve total containment, with no releases to the environment in undisturbed scenarios. The performance analyses described here are based on the assumption that long-term standards for disposal in clay/shale would be identical, in key aspects, to those prescribed for existing repository programs such as Yucca Mountain. This generic repository evaluation for shale is the first developed in the United States. Previous repository considerations have emphasized salt formations and volcanic rock formations. Much of the experience gained from U.S. repository development, such as seal system design, coupled process simulation, and application of performance assessment methodology, is applied here to scoping analyses for a shale repository. A contemporary understanding of clay mineralogy and attendant chemical environments has allowed identification of the appropriate features, events, and processes to be incorporated into the analysis.
Advanced multi-physics modeling provides key support for understanding the effects from coupled processes. The results of the assessment show that shale formations provide a technically advanced, scientifically sound disposal option for the U.S.
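The quoted transport distances are consistent with a simple scaling check for diffusion-limited transport, L ~ sqrt(2*De*t). The sketch below evaluates that scaling with an effective diffusion coefficient of 1e-11 m^2/s, which is an assumed, order-of-magnitude value typical of compacted clays rather than a parameter taken from the report.

    // Back-of-the-envelope check of the diffusive transport scale L ~ sqrt(2*De*t).
    // De is an assumed, illustrative effective diffusion coefficient for
    // clay/shale; it is not a value taken from the report.
    #include <cmath>
    #include <cstdio>

    int main() {
      const double De = 1.0e-11;                         // m^2/s, assumed
      const double t = 1.0e6 * 365.25 * 24.0 * 3600.0;   // one million years, in seconds
      const double L = std::sqrt(2.0 * De * t);          // characteristic length, m
      std::printf("diffusion length over 1 Myr: ~%.0f m\n", L);  // roughly 25 m
      return 0;
    }

An estimate in the tens of meters matches the lower end of the range quoted above; larger assumed effective diffusivities move the estimate toward the upper end.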
Effective product development requires a systematic and rigorous approach to innovation. Standard models of system engineering provide that approach.