There is significant interest in achieving technology innovation through new product development activities. It is recognized, however, that traditional project management practices focused only on performance, cost, and schedule attributes can often lead to risk mitigation strategies that limit new technology innovation. In this paper, a new approach is proposed for formally managing and quantifying technology innovation. This approach uses a risk-based framework that simultaneously optimizes innovation attributes along with traditional project management and system engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment was conducted. This experiment simultaneously managed the innovation risks and the product delivery risks through the proposed risk-based framework. Quantitative metrics for technology innovation were tracked, and the experimental results indicate that the risk-based approach can simultaneously achieve both project deliverable and innovation objectives.
The oil of the Strategic Petroleum Reserve (SPR) represents a national response to any potential emergency or intentional restriction of crude oil supply to this country, and conforms to international agreements to maintain such a reserve. As assurance that this reserve oil will be available in a timely manner should a restriction in supply occur, the oil of the reserve must meet certain transportation criteria. The transportation criteria require that the oil does not evolve dangerous gas, either explosive or toxic, while in the process of transport to, or storage at, the destination facility. This requirement can be a challenge because the stored oil can acquire dissolved gases while in the SPR. A series of reports has analyzed in exceptional detail the reasons for the increases, or regains, in gas content; however, some uncertainty remains in these explanations, along with an inability to predict why the regains occur. Where the regains are prohibitive and exceed the criteria, the oil must undergo degasification, in which excess portions of the volatile gas are removed. There are only two known sources of gas regain: the first is the salt dome formation itself, which may contain gas inclusions from which gas can be released during oil processing or storage; the second is an increase in the gases released by the volatile components of the crude oil itself during storage, especially if the stored oil undergoes heating or is subject to biological generation processes. In this work, the earlier analyses are reexamined and significant alterations in conclusions are proposed. The alterations are based on how the exchanged fluids, brine and oil, take up gas released from the domal salt during solutioning and during subsequent fluid exchanges. The transparency of the brine/oil interface and the transfer of gas across this interface remain important unanswered questions. The contribution from creep-induced damage releasing gas from the salt surrounding the cavern is considered through computations using the Multimechanism Deformation Coupled Fracture (MDCF) model, suggesting a relatively minor, but potentially significant, contribution to the regain process. Gains in gas content can apparently be generated from the oil itself during storage because the salt dome has been heated by the geothermal gradient of the earth. The heated domal salt transfers heat to the oil stored in the caverns, thereby increasing the gas released by the volatile components and raising the boiling point pressure of the oil. The process is essentially a variation on the fractionation of oil, in which each discrete component of the oil has a discrete temperature range over which that component can be volatilized and removed from the remaining components. The most volatile components are methane and ethane, the shortest-chain hydrocarbons. Since this fractionation is a fundamental aspect of oil behavior, the volatile component can be removed by degassing, potentially preventing the evolution of gas at or below the temperature of the degas process. While this process is well understood, the ability to describe the results of degassing and subsequent regain is not. Trends are not well defined for original gas content, regain, and the prescribed effects of degassing. As a result, prediction of cavern response is difficult.
As a consequence of this current analysis, it is suggested that solutioning brine of the final fluid exchange of a just completed cavern, immediately prior to the first oil filling, should be analyzed for gas content using existing analysis techniques. This would add important information and clarification to the regain process. It is also proposed that the quantity of volatile components, such as methane, be determined before and after any degasification operation.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the command specification of the DAKOTA software, providing input overviews, option descriptions, and example specifications.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers' manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would otherwise need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular-reference problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost, and therefore C++0x, approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complementary to the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, what really are most programmers getting out of C++ that would outweigh its extra complexity over C?
C++ zealots will argue this point, but the reality is that the popularity of C++ has peaked and is declining, while the popularity of C has remained fairly stable over the last decade [22]. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g., Klocwork [23]) could implement a preprocessor-like language similar to OpenMP [24] that would allow the programmer to declare (in comments) that certain blocks of code should be 'pointer-free' or allow smaller blocks to be 'pointers allowed'. This would significantly improve the robustness of code that uses the memory management classes described here.
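The basic idiom looks like the following minimal sketch (the class A, its factory function, and the surrounding program are illustrative assumptions, not code from the paper): the only call to new is wrapped immediately in a Teuchos::RCP by a non-member constructor, and no raw pointer or delete appears in the calling code.

```cpp
// Minimal sketch of the reference-counted Teuchos idiom described above.
// The class A and the factory createA() are hypothetical examples.
#include "Teuchos_RCP.hpp"

class A {
public:
  virtual ~A() {}
  virtual void doSomething() = 0;
};

class ConcreteA : public A {
public:
  virtual void doSomething() { /* ... */ }
};

// The factory returns an RCP, so ownership is explicit and reference counted;
// no raw new/delete appears in the calling code.
Teuchos::RCP<A> createA()
{
  return Teuchos::rcp(new ConcreteA);  // the only place 'new' is called
}

int main()
{
  Teuchos::RCP<A> a = createA();   // shared ownership, reference count = 1
  {
    Teuchos::RCP<A> alias = a;     // count = 2
    alias->doSomething();
  }                                // count drops back to 1
  return 0;
}                                  // object deleted when the last RCP is destroyed
```

In a debug build this same code benefits from the runtime checking described above; the optimized build strips those checks.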
The ceramic nanocomposite capacitor goals are: (1) more than double the energy density of ceramic capacitors (cutting size and weight by more than half); (2) potentially reduce cost (by a factor of >4) through decreased sintering temperature, allowing the use of lower-cost electrode materials such as 70/30 Ag/Pd; and (3) enable co-firing with other electrical components at the lower sintering temperature.
This report considers the calculation of the quasi-static nonlinear response of rectangular flat plates and tubes of rectangular cross-section subjected to compressive loads using quadrilateral shell finite element models. The principal objective is to assess the effect that the shell drilling stiffness parameter has on the calculated results. The calculated collapse load of elastic-plastic tubes of rectangular cross-section is of particular interest here. The drilling stiffness factor specifies the amount of artificial stiffness that is given to the shell element drilling degree of freedom (rotation normal to the plane of the element). The element formulation has no stiffness for this degree of freedom, and this can lead to numerical difficulties. The results indicate that in the problems considered it is necessary to add a small amount of drilling stiffness to obtain converged results when using either implicit quasi-static or explicit dynamics methods. The report concludes with a parametric study of the imperfection sensitivity of the calculated responses of the elastic-plastic tubes with rectangular cross-section.
To test the hypothesis that high-quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P-wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays, reducing the total number of ray paths by more than 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one, we limit the number of iterations to prevent full convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model with those of standard 1D models. We perform location tests on a global, geographically distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation relative to the standard 1D ak135 model, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that ak135 represents a global average and cannot therefore capture local and regional variations.
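The abstract refers to damping of the velocity adjustments between iterations; for orientation, a common way to write such a damped tomographic update is the standard damped least-squares objective below. This is an assumed textbook form, not a statement of the SALSA3D implementation.

```latex
% Assumed standard damped least-squares tomography objective (not spelled out
% in the abstract): G maps slowness perturbations m to travel-time residuals d,
% and the damping parameter \lambda limits the size of each model update.
\min_{m}\; \lVert G\,m - d \rVert_2^{2} \;+\; \lambda^{2}\,\lVert m \rVert_2^{2}
```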
This report evaluates the feasibility of high-level radioactive waste disposal in shale within the United States. The U.S. has many possible clay/shale/argillite basins with positive attributes for permanent disposal. Similar geologic formations have been extensively studied by international programs with largely positive results, over significant ranges of the most important material characteristics, including permeability, rheology, and sorptive potential. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in shale media. We develop scoping performance analyses, based on the applicable features, events, and processes identified by international investigators, to support a generic conclusion regarding post-closure safety. Requisite assumptions for these analyses include waste characteristics, disposal concepts, and important properties of the geologic formation. We then apply lessons learned from Sandia experience on the Waste Isolation Pilot Plant and the Yucca Mountain Project to develop a disposal strategy should a shale repository be considered as an alternative disposal pathway in the U.S. Disposal of high-level radioactive waste in suitable shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. Thermal-hydrologic-mechanical calculations indicate that temperatures near emplaced waste packages can be maintained below boiling and will decay to within a few degrees of the ambient temperature within a few decades (or longer depending on the waste form). Construction effects, ventilation, and the thermal pulse will lead to clay dehydration and deformation, confined to an excavation disturbed zone within a few meters of the repository, that can be reasonably characterized. Within a few centuries after waste emplacement, overburden pressures will seal fractures, resaturate the dehydrated zones, and provide a repository setting that strongly limits radionuclide movement to diffusive transport. Coupled hydrogeochemical transport calculations indicate maximum extents of radionuclide transport on the order of tens to hundreds of meters, or less, in a million years. Under the conditions modeled, a shale repository could achieve total containment, with no releases to the environment in undisturbed scenarios. The performance analyses described here are based on the assumption that long-term standards for disposal in clay/shale would be identical, in key aspects, to those prescribed for existing repository programs such as Yucca Mountain. This generic repository evaluation for shale is the first developed in the United States. Previous repository considerations have emphasized salt formations and volcanic rock formations. Much of the experience gained from U.S. repository development, such as seal system design, coupled process simulation, and application of performance assessment methodology, is applied here to scoping analyses for a shale repository. A contemporary understanding of clay mineralogy and attendant chemical environments has allowed identification of the appropriate features, events, and processes to be incorporated into the analysis.
Advanced multi-physics modeling provides key support for understanding the effects from coupled processes. The results of the assessment show that shale formations provide a technically advanced, scientifically sound disposal option for the U.S.
Effective product development requires a systematic and rigorous approach to innovation. Standard models of system engineering provide that approach.
Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will be needed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.
The rheology at gas-liquid interfaces strongly influences the stability and dynamics of foams and emulsions. Several experimental techniques are employed to characterize the rheology at liquid-gas interfaces with an emphasis on the non-Newtonian behavior of surfactant-laden interfaces. The focus is to relate the interfacial rheology to the foamability and foam stability of various aqueous systems. An interfacial stress rheometer (ISR) is used to measure the steady and dynamic rheology by applying an external magnetic field to actuate a magnetic needle suspended at the interface. Results are compared with those from a double wall ring attachment to a rotational rheometer (TA Instruments AR-G2). Micro-interfacial rheology (MIR) is also performed using optical tweezers to manipulate suspended microparticle probes at the interface to investigate the steady and dynamic rheology. Additionally, a surface dilatational rheometer (SDR) is used to periodically oscillate the volume of a pendant drop or buoyant bubble. Applying the Young-Laplace equation to the drop shape, a time-dependent surface tension can be calculated and used to determine the effective dilatational viscosity of an interface. Using the ISR, double wall ring, SDR, and MIR, a wide range of sensitivity in surface forces (fN to nN) can be explored as each experimental method has different sensitivities. Measurements will be compared to foam stability.
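For reference, the Young-Laplace relation invoked in the SDR analysis connects the measured drop or bubble shape to a time-dependent surface tension through the pressure jump across the interface. This is the standard form of the relation; the specific shape-fitting procedure is not described in the abstract.

```latex
% Young-Laplace relation: pressure jump \Delta p across the interface in terms
% of the surface tension \gamma and the principal radii of curvature R_1, R_2.
\Delta p \;=\; \gamma \left( \frac{1}{R_1} + \frac{1}{R_2} \right)
```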
We present a new model for closing a system of Lagrangian hydrodynamics equations for a two-material cell with a single velocity model. We describe a new approach that is motivated by earlier work of Delov and Sadchikov and of Goncharov and Yanilkin. Using a linearized Riemann problem to initialize volume fraction changes, we require that each material satisfy its own pdV equation, which breaks the overall energy balance in the mixed cell. To enforce this balance, we redistribute the energy discrepancy by assuming that the corresponding pressure change in each material is equal. This multiple-material model is packaged as part of a two-step time integration scheme. We compare results of our approach with other models and with corresponding pure-material calculations, on two-material test problems with ideal-gas or stiffened-gas equations of state.
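As an illustration of the closure idea, the sketch below specializes the equal-pressure-change redistribution to two ideal gases at fixed densities, so that each material's pressure change is (gamma - 1) * rho * de. The struct layout, variable names, and the ideal-gas simplification are assumptions made for illustration only; they are not the authors' formulation, which also handles stiffened-gas equations of state.

```cpp
// Illustrative sketch only: an ideal-gas specialization of the
// "equal pressure change" energy redistribution described above.
#include <cstdio>

struct Material {
  double gamma;  // ideal-gas ratio of specific heats
  double rho;    // density in the mixed cell (held fixed here)
  double mass;   // mass of this material in the cell
  double e;      // specific internal energy
};

// Distribute the mixed-cell energy discrepancy dE between two materials so
// that the implied pressure change (gamma - 1) * rho * de is equal for both.
void redistributeEnergy(Material& m1, Material& m2, double dE)
{
  const double a1 = (m1.gamma - 1.0) * m1.rho;             // dp1 = a1 * de1
  const double a2 = (m2.gamma - 1.0) * m2.rho;             // dp2 = a2 * de2
  const double de1 = dE / (m1.mass + m2.mass * a1 / a2);   // from m1*de1 + m2*de2 = dE and a1*de1 = a2*de2
  const double de2 = (a1 / a2) * de1;
  m1.e += de1;
  m2.e += de2;
}

int main()
{
  Material gasA = {1.40, 1.0, 0.6, 2.5};
  Material gasB = {1.67, 0.1, 0.4, 3.0};
  redistributeEnergy(gasA, gasB, 0.01);  // dE: energy not accounted for by the per-material pdV updates
  std::printf("updated energies: eA=%g eB=%g\n", gasA.e, gasB.e);
  return 0;
}
```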
Full Wavefield Seismic Inversion (FWI) estimates a subsurface elastic model by iteratively minimizing the difference between observed and simulated data. This process is extremely compute intensive, with a cost on the order of at least hundreds of prestack reverse time migrations. For time-domain and Krylov-based frequency-domain FWI, the cost of FWI is proportional to the number of seismic sources inverted. We have found that the cost of FWI can be significantly reduced by applying it to data processed by encoding and summing individual source gathers, and by changing the encoding functions between iterations. The encoding step forms a single gather from many input source gathers. This gather represents data that would have been acquired from a spatially distributed set of sources operating simultaneously with different source signatures. We demonstrate, using synthetic data, significant cost reduction by applying FWI to encoded simultaneous-source data.
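The encode-and-sum step can be pictured with the short sketch below, which uses random +/-1 polarity weights as one possible choice of encoding function; the abstract does not specify the encoding actually used, and the Gather layout is an illustrative assumption. Changing the random seed between iterations changes the encoding, and the same per-source weights would be applied to the simulated simultaneous sources.

```cpp
// Hedged sketch of the encode-and-sum step: individual source gathers are
// multiplied by per-source encoding weights (here, random +/-1 polarities)
// and summed into a single simultaneous-source gather.
#include <random>
#include <vector>

using Gather = std::vector<double>;  // one flattened trace-sample vector per source gather

Gather encodeAndSum(const std::vector<Gather>& sourceGathers, unsigned seed)
{
  std::mt19937 rng(seed);                 // a new seed each FWI iteration changes the encoding
  std::bernoulli_distribution coin(0.5);
  Gather combined(sourceGathers.front().size(), 0.0);  // assumes all gathers share one size
  for (const Gather& g : sourceGathers) {
    const double w = coin(rng) ? 1.0 : -1.0;            // per-source encoding weight
    for (std::size_t i = 0; i < combined.size(); ++i)
      combined[i] += w * g[i];
  }
  return combined;
}
```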
This workshop is about methodologies, tools, techniques, models, training, codes and standards, etc., that can improve the reliability of systems while reducing costs. We have intentionally scaled back on presentation time to allow more time for interaction. Sandia's PV Program Vision: recognition as a world-class facility to develop and integrate new photovoltaic components, systems, and architectures for the future of our electric/energy delivery systems.
We demonstrate a new semantic method for automatic analysis of wide-area, high-resolution overhead imagery to tip and cue human intelligence analysts to human activity. In the open demonstration, we find and trace cars and rooftops. Our methodology, extended to analysis of voxels, may be applicable to understanding morphology and to automatic tracing of neurons in large-scale, serial-section TEM datasets. We defined an algorithm and software implementation that efficiently finds all combinations of image blobs that satisfy given shape semantics, where image blobs are formed as a general-purpose, first step that 'oversegments' image pixels into blobs of similar pixels. We will demonstrate the remarkable power (ROC) of this combinatorial-based work flow for automatically tracing any automobiles in a scene by applying semantics that require a subset of image blobs to fill out a rectangular shape, with width and height in given intervals. In most applications we find that the new combinatorial-based work flow produces alternative (overlapping) tracings of possible objects (e.g. cars) in a scene. To force an estimation (tracing) of a consistent collection of objects (cars), a quick-and-simple greedy algorithm is often sufficient. We will demonstrate a more powerful resolution method: we produce a weighted graph from the conflicts in all of our enumerated hypotheses, and then solve a maximal independent vertex set problem on this graph to resolve conflicting hypotheses. This graph computation is almost certain to be necessary to adequately resolve multiple, conflicting neuron topologies into a set that is most consistent with a TEM dataset.
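The conflict-resolution step can be sketched as follows, assuming each hypothesis carries a weight and a list of conflicting (overlapping) hypotheses; this data layout is hypothetical. The sketch shows a simple greedy variant of the idea described above: hypotheses are accepted in order of decreasing weight and their conflicts blocked, which yields a maximal independent set of the conflict graph, though not necessarily the same solution as a dedicated maximal independent vertex set solver.

```cpp
// Greedy conflict resolution on a hypothesis conflict graph (illustrative).
#include <algorithm>
#include <cstddef>
#include <vector>

struct Hypothesis {
  double weight;                         // e.g., how well the blob set fits the shape semantics
  std::vector<std::size_t> conflicts;    // indices of overlapping hypotheses
};

// Repeatedly accept the highest-weight hypothesis that does not conflict with
// one already accepted; the accepted set is independent in the conflict graph.
std::vector<std::size_t> resolveConflicts(const std::vector<Hypothesis>& hyps)
{
  std::vector<std::size_t> order(hyps.size());
  for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
  std::sort(order.begin(), order.end(),
            [&](std::size_t a, std::size_t b) { return hyps[a].weight > hyps[b].weight; });

  std::vector<bool> blocked(hyps.size(), false);
  std::vector<std::size_t> accepted;
  for (std::size_t i : order) {
    if (blocked[i]) continue;
    accepted.push_back(i);                                      // keep this tracing
    for (std::size_t j : hyps[i].conflicts) blocked[j] = true;  // suppress its conflicts
  }
  return accepted;
}
```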
This report comprises an annual summary of activities under the U.S. Strategic Petroleum Reserve (SPR) Vapor Pressure Committee in FY2009. The committee provides guidance to senior project management on the issues of crude oil vapor pressure monitoring and mitigation. The principal objectives of the vapor pressure program are, in the event of an SPR drawdown, to minimize the impact on the environment and assure worker safety and public health from crude oil vapor emissions. The annual report reviews key program areas including monitoring program status, mitigation program status, new developments in measurements and modeling, and the path forward, including specific recommendations on cavern sampling for the next year. The contents of this report were first presented to SPR senior management in December 2009, in a deliverable from the vapor pressure committee. The current SAND report is an adaptation for the Sandia technical audience.
The integration of block-copolymers (BCPs) and nanoimprint lithography (NIL) presents a novel and cost-effective approach to achieving nanoscale patterning capabilities. The authors demonstrate the fabrication of a surface-enhanced Raman scattering device using templates created by the BCP-NIL integrated method. The method utilizes a poly(styrene-block-methyl methacrylate) cylindrical-forming diblock-copolymer as a masking material to create a Si template, which is then used to perform a thermal imprint of a poly(methyl methacrylate) (PMMA) layer on a Si substrate. Au with a Cr adhesion layer was evaporated onto the patterned PMMA and the subsequent lift-off resulted in an array of nanodots. Raman spectra collected for samples of R6G on Si substrates with and without patterned nanodots showed enhancement of peak intensities due to the presence of the nanodot array. The demonstrated BCP-NIL fabrication method shows promise for cost-effective nanoscale fabrication of plasmonic and nanoelectronic devices.
The purpose of the DOE Metal Hydride Center of Excellence (MHCoE) is to develop hydrogen storage materials with engineering properties that allow the use of these materials in a way that satisfies the DOE/FreedomCAR Program system requirements for automotive hydrogen storage. The Center is a multidisciplinary and collaborative effort with technical interactions divided into two broad areas: (1) mechanisms and modeling (which provide a theoretically driven basis for pursuing new materials) and (2) materials development (in which new materials are synthesized and characterized). Driving all of this work are the hydrogen storage system specifications outlined by the FreedomCAR Program for 2010 and 2015. The organization of the MHCoE during the past year is shown in Figure 1. During the past year, the technical work was divided into four project areas, whose purpose is to organize the MHCoE technical work along appropriate and flexible technical lines. The four areas are: (1) Project A - Destabilized Hydrides: the objective of this project is to controllably modify the thermodynamics of hydrogen sorption reactions in light metal hydrides using hydride destabilization strategies; (2) Project B - Complex Anionic Materials: the objective is to predict and synthesize highly promising new anionic hydride materials; (3) Project C - Amides/Imides Storage Materials: the objective is to assess the viability of amides and imides (inorganic materials containing NH2 and NH moieties, respectively) for onboard hydrogen storage; and (4) Project D - Alane (AlH3): the objective is to understand the sorption and regeneration properties of AlH3 for hydrogen storage.
Decontamination of anthrax spores in critical infrastructure (e.g., subway systems, major airports) and critical assets (e.g., the interior of aircraft) can be challenging because effective decontaminants can damage materials. Current decontamination methods require the use of highly toxic and/or highly corrosive chemical solutions because bacterial spores are very difficult to kill. Bacterial spores such as Bacillus anthracis, the infectious agent of anthrax, are one of the most resistant forms of life and are several orders of magnitude more difficult to kill than their associated vegetative cells. Remediation of facilities and other spaces (e.g., subways, airports, and the interior of aircraft) contaminated with anthrax spores currently requires highly toxic and corrosive chemicals such as chlorine dioxide gas, vapor-phase hydrogen peroxide, or high-strength bleach, typically requiring complex deployment methods. We have developed a non-toxic, non-corrosive decontamination method to kill highly resistant bacterial spores in critical infrastructure and critical assets. A chemical solution triggers the germination process in bacterial spores and causes those spores to rapidly and completely change to much less-resistant vegetative cells that can be easily killed. The vegetative cells are then exposed to mild chemicals (e.g., low concentrations of hydrogen peroxide, quaternary ammonium compounds, alcohols, aldehydes, etc.) or natural elements (e.g., heat, humidity, ultraviolet light, etc.) for complete and rapid kill. Our process employs a novel germination solution consisting of low-cost, non-toxic and non-corrosive chemicals. We are testing both direct surface application and aerosol delivery of the solutions. A key Homeland Security need is to develop the capability to rapidly recover from an attack utilizing biological warfare agents. This project will provide the capability to rapidly and safely decontaminate critical facilities and assets to return them to normal operations as quickly as possible, sparing significant economic damage by re-opening critical facilities more rapidly and safely. Facilities and assets contaminated with Bacillus anthracis (i.e., anthrax) spores can be decontaminated with mild chemicals rather than the harsh chemicals currently needed. Both the 'germination' solution and the 'kill' solution are constructed of 'off-the-shelf,' inexpensive chemicals. The method can be utilized by directly spraying the solutions onto exposed surfaces or by applying the solutions as aerosols (i.e., small droplets), which can also reach hidden surfaces.
Photovoltaic (PV) system performance models are relied upon to provide accurate predictions of energy production for proposed and existing PV systems under a wide variety of environmental conditions. Ground based meteorological measurements are only available from a relatively small number of locations. In contrast, satellite-based radiation and weather data (e.g., SUNY database) are becoming increasingly available for most locations in North America, Europe, and Asia on a 10 x 10 km grid or better. This paper presents a study of how PV performance model results are affected when satellite-based weather data is used in place of ground-based measurements.
Los Alamos and Sandia National Laboratories have formed a new high performance computing center, the Alliance for Computing at the Extreme Scale (ACES). The two labs will jointly architect, develop, procure and operate capability systems for DOE's Advanced Simulation and Computing Program. This presentation will discuss a petascale production capability system, Cielo, that will be deployed in late 2010, and a new partnership with Cray on advanced interconnect technologies.
Improving the thermal performance of a trough plant will lower the LCOE: (1) Improve mirror alignment using the TOPCAT system - currently this increases the optical intercept of existing trough solar power plants; in the future it allows larger apertures with the same receiver size in new trough solar power plants, along with increased concentration ratios, collection efficiencies, and economies of scale; and (2) Improve tracking using a closed-loop tracking system - experience from our own operations and from industry shows that the open-loop tracking currently used needs to be improved. Performance testing of a trough module and/or receiver on the rotating platform addresses two needs: (1) installed costs of a trough plant are high, and a significant portion of this is the material and assembly cost of the trough module; these costs need to be reduced without sacrificing performance; and (2) new receiver coatings with lower heat loss and higher absorptivity. The TOPCAT system is an optical evaluation tool for parabolic trough solar collectors. Aspects of the TOPCAT system are: (1) practical, rapid, and cost effective; (2) inherently aligns mirrors to the receiver of an entire solar collector array (SCA); (3) can be used for existing installations - no equivalent tool exists; (4) can be used during production; (5) currently can be used on LS-2 or LS-3 configurations, but can be easily modified for any configuration; and (6) generally a one-time use.