The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the command specification of the DAKOTA software, providing input overviews, option descriptions, and example specifications.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where they appear in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which have no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates, or the X, Y, and Z features of C++) all that well anyway, what are most programmers really getting out of C++ that would outweigh the extra complexity of C++ over C?
C++ zealots will argue this point, but the reality is that the popularity of C++ has peaked and is now declining, while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g., Klocwork) could implement a preprocessor-like language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be 'pointer-free' or allow smaller blocks to be 'pointers allowed'. This would significantly improve the robustness of code that uses the memory management classes described here.
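A minimal sketch of the idiom advocated here, assuming only the Trilinos Teuchos headers are available (the classes Mesh and Solver are hypothetical placeholders, not part of Teuchos):

```cpp
// Sketch of the Teuchos memory-management idiom: shared ownership through
// Teuchos::RCP instead of raw pointers and explicit new/delete.
// Mesh and Solver are hypothetical example classes.
#include "Teuchos_RCP.hpp"

class Mesh {
public:
  int numCells() const { return 100; }
};

class Solver {
public:
  // Reference-counted, possibly shared ownership is stated in the type itself.
  explicit Solver(const Teuchos::RCP<const Mesh>& mesh) : mesh_(mesh) {}
  int size() const { return mesh_->numCells(); }
private:
  Teuchos::RCP<const Mesh> mesh_;
};

int main() {
  // rcp(new ...) is the only place an allocation appears; ownership is handed
  // to the reference-counted handle immediately.
  Teuchos::RCP<Mesh> mesh = Teuchos::rcp(new Mesh);
  Solver solver(mesh);
  // No delete anywhere: the Mesh is freed when the last RCP goes out of scope.
  // In a debug build, dangling references and similar misuse are reported at runtime.
  return solver.size() > 0 ? 0 : 1;
}
```

The point of the sketch is that lifetime and ownership are expressed by the types themselves rather than by comments or calling conventions.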
The ceramic nanocomposite capacitor goals are: (1) more than double the energy density of ceramic capacitors (cutting size and weight by more than half); (2) potential cost reduction (factor of >4) due to decreased sintering temperature (allowing the use of lower-cost electrode materials such as 70/30 Ag/Pd); and (3) a lower sintering temperature that will allow co-firing with other electrical components.
This report considers the calculation of the quasi-static nonlinear response of rectangular flat plates and tubes of rectangular cross-section subjected to compressive loads using quadrilateral shell finite element models. The principal objective is to assess the effect that the shell drilling stiffness parameter has on the calculated results. The calculated collapse load of elastic-plastic tubes of rectangular cross-section is of particular interest here. The drilling stiffness factor specifies the amount of artificial stiffness that is given to the shell element drilling degree of freedom (rotation normal to the plane of the element). The element formulation has no stiffness for this degree of freedom, and this can lead to numerical difficulties. The results indicate that in the problems considered it is necessary to add a small amount of drilling stiffness to obtain converged results when using either implicit quasi-static or explicit dynamic methods. The report concludes with a parametric study of the imperfection sensitivity of the calculated responses of the elastic-plastic tubes with rectangular cross-section.
To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with {approx}400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas, et al. We compare the travel-time prediction and location capabilities of this model over standard 1D models. We perform location tests on a global, geographically-distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation over standard 1D ak135, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that ak135 represents a global average and cannot therefore capture local and regional variations.
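For context, the damping described in this abstract corresponds to the standard regularized (damped least-squares) tomographic update, written here schematically in generic symbols rather than the report's own notation:

```latex
% Generic damped least-squares tomographic update (illustrative only):
% G = matrix of ray-path sensitivities, \Delta d = travel-time residuals,
% \Delta m = slowness/velocity adjustments, \lambda = damping parameter.
\min_{\Delta m}\; \| G\,\Delta m - \Delta d \|_2^{2} \;+\; \lambda^{2}\,\| \Delta m \|_2^{2}
```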
This report evaluates the feasibility of high-level radioactive waste disposal in shale within the United States. The U.S. has many possible clay/shale/argillite basins with positive attributes for permanent disposal. Similar geologic formations have been extensively studied by international programs with largely positive results, over significant ranges of the most important material characteristics including permeability, rheology, and sorptive potential. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in shale media. We develop scoping performance analyses, based on the applicable features, events, and processes identified by international investigators, to support a generic conclusion regarding post-closure safety. Requisite assumptions for these analyses include waste characteristics, disposal concepts, and important properties of the geologic formation. We then apply lessons learned from Sandia experience on the Waste Isolation Pilot Plant and the Yucca Mountain Project to develop a disposal strategy should a shale repository be considered as an alternative disposal pathway in the U.S. Disposal of high-level radioactive waste in suitable shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. Thermal-hydrologic-mechanical calculations indicate that temperatures near emplaced waste packages can be maintained below boiling and will decay to within a few degrees of the ambient temperature within a few decades (or longer depending on the waste form). Construction effects, ventilation, and the thermal pulse will lead to clay dehydration and deformation, confined to an excavation disturbed zone within a few meters of the repository, that can be reasonably characterized. Within a few centuries after waste emplacement, overburden pressures will seal fractures, resaturate the dehydrated zones, and provide a repository setting that strongly limits radionuclide movement to diffusive transport. Coupled hydrogeochemical transport calculations indicate maximum extents of radionuclide transport on the order of tens to hundreds of meters, or less, in a million years. Under the conditions modeled, a shale repository could achieve total containment, with no releases to the environment in undisturbed scenarios. The performance analyses described here are based on the assumption that long-term standards for disposal in clay/shale would be identical, in key aspects, to those prescribed for existing repository programs such as Yucca Mountain. This generic repository evaluation for shale is the first developed in the United States. Previous repository considerations have emphasized salt formations and volcanic rock formations. Much of the experience gained from U.S. repository development, such as seal system design, coupled process simulation, and application of performance assessment methodology, is applied here to scoping analyses for a shale repository. A contemporary understanding of clay mineralogy and attendant chemical environments has allowed identification of the appropriate features, events, and processes to be incorporated into the analysis.
Advanced multi-physics modeling provides key support for understanding the effects from coupled processes. The results of the assessment show that shale formations provide a technically advanced, scientifically sound disposal option for the U.S.
To do effective product development, a systematic and rigorous approach to innovation is necessary. Standard models of system engineering provide that approach.
Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will need to be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.
The rheology at gas-liquid interfaces strongly influences the stability and dynamics of foams and emulsions. Several experimental techniques are employed to characterize the rheology at liquid-gas interfaces with an emphasis on the non-Newtonian behavior of surfactant-laden interfaces. The focus is to relate the interfacial rheology to the foamability and foam stability of various aqueous systems. An interfacial stress rheometer (ISR) is used to measure the steady and dynamic rheology by applying an external magnetic field to actuate a magnetic needle suspended at the interface. Results are compared with those from a double wall ring attachment to a rotational rheometer (TA Instruments AR-G2). Micro-interfacial rheology (MIR) is also performed using optical tweezers to manipulate suspended microparticle probes at the interface to investigate the steady and dynamic rheology. Additionally, a surface dilatational rheometer (SDR) is used to periodically oscillate the volume of a pendant drop or buoyant bubble. Applying the Young-Laplace equation to the drop shape, a time-dependent surface tension can be calculated and used to determine the effective dilatational viscosity of an interface. Using the ISR, double wall ring, SDR, and MIR, a wide range of sensitivity in surface forces (fN to nN) can be explored as each experimental method has different sensitivities. Measurements will be compared to foam stability.
We present a new model for closing a system of Lagrangian hydrodynamics equations for a two-material cell with a single velocity model. We describe a new approach that is motivated by earlier work of Delov and Sadchikov and of Goncharov and Yanilkin. Using a linearized Riemann problem to initialize volume fraction changes, we require that each material satisfy its own pdV equation, which breaks the overall energy balance in the mixed cell. To enforce this balance, we redistribute the energy discrepancy by assuming that the corresponding pressure change in each material is equal. This multiple-material model is packaged as part of a two-step time integration scheme. We compare results of our approach with other models and with corresponding pure-material calculations, on two-material test problems with ideal-gas or stiffened-gas equations of state.
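A schematic summary of the closure, in generic symbols rather than the authors' notation (this is an illustrative paraphrase of the abstract, not the paper's exact formulation):

```latex
% Illustrative summary (k = material index, m_k = material mass, e_k = specific
% internal energy, p_k = pressure, V_k = material volume in the mixed cell):
m_k\,\mathrm{d}e_k = -\,p_k\,\mathrm{d}V_k \quad \text{(per-material } p\,\mathrm{d}V \text{ update)},
\qquad
\delta E = \mathrm{d}E_{\mathrm{cell}} - \sum_k m_k\,\mathrm{d}e_k \quad \text{(energy discrepancy)},
\qquad
\sum_k \delta E_k = \delta E \ \text{ with the } \delta E_k \text{ chosen so that } \delta p_1 = \delta p_2 .
```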
Full Wavefield Seismic Inversion (FWI) estimates a subsurface elastic model by iteratively minimizing the difference between observed and simulated data. This process is extremely compute intensive, with a cost on the order of at least hundreds of prestack reverse time migrations. For time-domain and Krylov-based frequency-domain FWI, the cost of FWI is proportional to the number of seismic sources inverted. We have found that the cost of FWI can be significantly reduced by applying it to data processed by encoding and summing individual source gathers, and by changing the encoding functions between iterations. The encoding step forms a single gather from many input source gathers. This gather represents data that would have been acquired from a spatially distributed set of sources operating simultaneously with different source signatures. We demonstrate, using synthetic data, significant cost reduction by applying FWI to encoded simultaneous-source data.
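As an illustrative sketch of the encoding step, the following forms one simultaneous-source gather from many individual gathers using random-polarity encoding; random polarity is only one possible choice, and the abstract does not specify which encoding functions were used:

```cpp
// Sketch of simultaneous-source encoding: apply a per-source encoding function
// (here, a random +/-1 polarity) and sum all source gathers into one gather.
// Other encodings (random time shifts, random phases) follow the same pattern.
#include <random>
#include <vector>

using Gather = std::vector<std::vector<double>>; // [receiver][time sample]

Gather encodeAndSum(const std::vector<Gather>& gathers, unsigned seed) {
  // Assumes a non-empty set of gathers with identical receiver/time dimensions.
  std::mt19937 rng(seed);
  std::bernoulli_distribution flip(0.5);

  Gather summed(gathers.front().size(),
                std::vector<double>(gathers.front().front().size(), 0.0));

  for (const Gather& g : gathers) {
    const double polarity = flip(rng) ? 1.0 : -1.0; // encoding for this source
    for (std::size_t r = 0; r < g.size(); ++r)
      for (std::size_t t = 0; t < g[r].size(); ++t)
        summed[r][t] += polarity * g[r][t];
  }
  // One encoded gather; changing the seed each FWI iteration changes the encoding.
  return summed;
}
```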
This workshop is about methodologies, tools, techniques, models, training, codes and standards, etc., that can improve the reliability of systems while reducing costs. We've intentionally scaled back on presentation time to allow more time for interaction. Sandia's PV Program Vision - Recognition as a world-class facility to develop and integrate new photovoltaic components, systems, and architectures for the future of our electric/energy delivery systems.
We demonstrate a new semantic method for automatic analysis of wide-area, high-resolution overhead imagery to tip and cue human intelligence analysts to human activity. In the open demonstration, we find and trace cars and rooftops. Our methodology, extended to analysis of voxels, may be applicable to understanding morphology and to automatic tracing of neurons in large-scale, serial-section TEM datasets. We defined an algorithm and software implementation that efficiently finds all combinations of image blobs that satisfy given shape semantics, where image blobs are formed as a general-purpose, first step that 'oversegments' image pixels into blobs of similar pixels. We will demonstrate the remarkable power (ROC) of this combinatorial-based work flow for automatically tracing any automobiles in a scene by applying semantics that require a subset of image blobs to fill out a rectangular shape, with width and height in given intervals. In most applications we find that the new combinatorial-based work flow produces alternative (overlapping) tracings of possible objects (e.g. cars) in a scene. To force an estimation (tracing) of a consistent collection of objects (cars), a quick-and-simple greedy algorithm is often sufficient. We will demonstrate a more powerful resolution method: we produce a weighted graph from the conflicts in all of our enumerated hypotheses, and then solve a maximal independent vertex set problem on this graph to resolve conflicting hypotheses. This graph computation is almost certain to be necessary to adequately resolve multiple, conflicting neuron topologies into a set that is most consistent with a TEM dataset.
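A greedy pass over the conflict graph of the kind mentioned above can be sketched as follows; the data layout and scoring are illustrative assumptions, not the demonstrated implementation:

```cpp
// Greedy weighted independent-set sketch for resolving conflicting object
// hypotheses: repeatedly keep the highest-scoring hypothesis and discard every
// hypothesis that conflicts with it (e.g., shares image blobs).
#include <algorithm>
#include <cstddef>
#include <vector>

struct Hypothesis {
  double score;                        // e.g., how well the blobs fill a rectangle
  std::vector<std::size_t> conflicts;  // indices of conflicting hypotheses
};

std::vector<std::size_t> resolve(const std::vector<Hypothesis>& hyps) {
  std::vector<std::size_t> order(hyps.size());
  for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
  std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
    return hyps[a].score > hyps[b].score;  // best hypotheses first
  });

  std::vector<bool> removed(hyps.size(), false);
  std::vector<std::size_t> kept;
  for (std::size_t i : order) {
    if (removed[i]) continue;
    kept.push_back(i);                                            // accept hypothesis i
    for (std::size_t j : hyps[i].conflicts) removed[j] = true;    // drop its conflicts
  }
  return kept;  // a maximal set of mutually consistent object tracings
}
```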
This report comprises an annual summary of activities under the U.S. Strategic Petroleum Reserve (SPR) Vapor Pressure Committee in FY2009. The committee provides guidance to senior project management on the issues of crude oil vapor pressure monitoring and mitigation. The principal objectives of the vapor pressure program are, in the event of an SPR drawdown, to minimize the impact on the environment and assure worker safety and public health from crude oil vapor emissions. The annual report reviews key program areas including monitoring program status, mitigation program status, new developments in measurements and modeling, and path forward including specific recommendations on cavern sampling for the next year. The contents of this report were first presented to SPR senior management in December 2009, in a deliverable from the vapor pressure committee. The current SAND report is an adaptation for the Sandia technical audience.
The integration of block-copolymers (BCPs) and nanoimprint lithography (NIL) presents a novel and cost-effective approach to achieving nanoscale patterning capabilities. The authors demonstrate the fabrication of a surface-enhanced Raman scattering device using templates created by the BCP-NIL integrated method. The method utilizes a poly(styrene-block-methyl methacrylate) cylindrical-forming diblock-copolymer as a masking material to create a Si template, which is then used to perform a thermal imprint of a poly(methyl methacrylate) (PMMA) layer on a Si substrate. Au with a Cr adhesion layer was evaporated onto the patterned PMMA and the subsequent lift-off resulted in an array of nanodots. Raman spectra collected for samples of R6G on Si substrates with and without patterned nanodots showed enhancement of peak intensities due to the presence of the nanodot array. The demonstrated BCP-NIL fabrication method shows promise for cost-effective nanoscale fabrication of plasmonic and nanoelectronic devices.
The purpose of the DOE Metal Hydride Center of Excellence (MHCoE) is to develop hydrogen storage materials with engineering properties that allow the use of these materials in a way that satisfies the DOE/FreedomCAR Program system requirements for automotive hydrogen storage. The Center is a multidisciplinary and collaborative effort with technical interactions divided into two broad areas: (1) mechanisms and modeling (which provide a theoretically driven basis for pursuing new materials) and (2) materials development (in which new materials are synthesized and characterized). Driving all of this work are the hydrogen storage system specifications outlined by the FreedomCAR Program for 2010 and 2015. The organization of the MHCoE during the past year is shown in Figure 1. During the past year, the technical work was divided into four project areas. The purpose of the project areas is to organize the MHCoE technical work along appropriate and flexible technical lines. The four areas are summarized as follows: (1) Project A - Destabilized Hydrides: the objective of this project is to controllably modify the thermodynamics of hydrogen sorption reactions in light metal hydrides using hydride destabilization strategies; (2) Project B - Complex Anionic Materials: the objective is to predict and synthesize highly promising new anionic hydride materials; (3) Project C - Amides/Imides Storage Materials: the objective of Project C is to assess the viability of amides and imides (inorganic materials containing NH{sub 2} and NH moieties, respectively) for onboard hydrogen storage; and (4) Project D - Alane, AlH{sub 3}: the objective of Project D is to understand the sorption and regeneration properties of AlH{sub 3} for hydrogen storage.
Decontamination of anthrax spores in critical infrastructure (e.g., subway systems, major airports) and critical assets (e.g., the interior of aircraft) can be challenging because effective decontaminants can damage materials. Current decontamination methods require the use of highly toxic and/or highly corrosive chemical solutions because bacterial spores are very difficult to kill. Bacterial spores such as Bacillus anthracis, the infectious agent of anthrax, are one of the most resistant forms of life and are several orders of magnitude more difficult to kill than their associated vegetative cells. Remediation of facilities and other spaces (e.g., subways, airports, and the interior of aircraft) contaminated with anthrax spores currently requires highly toxic and corrosive chemicals such as chlorine dioxide gas, vapor-phase hydrogen peroxide, or high-strength bleach, typically requiring complex deployment methods. We have developed a non-toxic, non-corrosive decontamination method to kill highly resistant bacterial spores in critical infrastructure and critical assets. A chemical solution triggers the germination process in bacterial spores and causes those spores to rapidly and completely convert to much less-resistant vegetative cells that can be easily killed. The vegetative cells are then exposed to mild chemicals (e.g., low concentrations of hydrogen peroxide, quaternary ammonium compounds, alcohols, aldehydes, etc.) or natural elements (e.g., heat, humidity, ultraviolet light, etc.) for complete and rapid kill. Our process employs a novel germination solution consisting of low-cost, non-toxic and non-corrosive chemicals. We are testing both direct surface application and aerosol delivery of the solutions. A key Homeland Security need is to develop the capability to rapidly recover from an attack utilizing biological warfare agents. This project will provide the capability to rapidly and safely decontaminate critical facilities and assets to return them to normal operations as quickly as possible, avoiding significant economic damage by re-opening critical facilities more rapidly and safely. Facilities and assets contaminated with Bacillus anthracis (i.e., anthrax) spores can be decontaminated with mild chemicals as compared to the harsh chemicals currently needed. Both the 'germination' solution and the 'kill' solution are constructed of 'off-the-shelf,' inexpensive chemicals. The method can be utilized by directly spraying the solutions onto exposed surfaces or by application of the solutions as aerosols (i.e., small droplets), which can also reach hidden surfaces.
Photovoltaic (PV) system performance models are relied upon to provide accurate predictions of energy production for proposed and existing PV systems under a wide variety of environmental conditions. Ground based meteorological measurements are only available from a relatively small number of locations. In contrast, satellite-based radiation and weather data (e.g., SUNY database) are becoming increasingly available for most locations in North America, Europe, and Asia on a 10 x 10 km grid or better. This paper presents a study of how PV performance model results are affected when satellite-based weather data is used in place of ground-based measurements.
Los Alamos and Sandia National Laboratories have formed a new high performance computing center, the Alliance for Computing at the Extreme Scale (ACES). The two labs will jointly architect, develop, procure and operate capability systems for DOE's Advanced Simulation and Computing Program. This presentation will discuss a petascale production capability system, Cielo, that will be deployed in late 2010, and a new partnership with Cray on advanced interconnect technologies.
Improving the thermal performance of a trough plant will lower the LCOE: (1) Improve mirror alignment using the TOPCAT system - current: increases the optical intercept of existing trough solar power plants; future: allows larger apertures with the same receiver size in new trough solar power plants; and increased concentration ratios/collection efficiencies and economies of scale; and (2) Improve tracking using a closed-loop tracking system - open-loop tracking is currently used, and our own experience and feedback from industry show the need for an improved method. Performance testing of a trough module and/or receiver on the rotating platform: (1) Installed costs of a trough plant are high. A significant portion of this is the material and assembly cost of the trough module. These costs need to be reduced without sacrificing performance; and (2) New receiver coatings with lower heat loss and higher absorptivity. The TOPCAT system is an optical evaluation tool for parabolic trough solar collectors. Aspects of the TOPCAT system are: (1) practical, rapid, and cost effective; (2) inherently aligns mirrors to the receiver of an entire solar collector array (SCA); (3) can be used for existing installations - no equivalent tool exists; (4) can be used during production; (5) currently can be used on LS-2 or LS-3 configurations, but can be easily modified for any configuration; and (6) generally, one-time use.
Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely open. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides an empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback.
Swarms of earthquakes and/or aftershock sequences can dramatically increase the level of seismicity in a region for a period of time lasting from days to months, depending on the swarm or sequence. Such occurrences can provide a large amount of useful information to seismologists. For those who monitor seismic events for possible nuclear explosions, however, these swarms/sequences are a nuisance. In an explosion monitoring system, each event must be treated as a possible nuclear test until it can be proven, to a high degree of confidence, not to be. Seismic events recorded by the same station with highly correlated waveforms almost certainly have a similar location and source type, so clusters of events within a swarm can quickly be identified as earthquakes. We have developed a number of tools that can be used to exploit the high degree of waveform similarity expected to be associated with swarms/sequences. Dendro Tool measures correlations between known events. The Waveform Correlation Detector is intended to act as a detector, finding events in raw data which correlate with known events. The Self Scanner is used to find all correlated segments within a raw data stream and does not require an event library. All three techniques together provide an opportunity to study the similarities of events in an aftershock sequence in different ways. To comprehensively characterize the benefits and limits of waveform correlation techniques, we studied 3 aftershock sequences, using our 3 tools, at multiple stations. We explored the effects of station distance and event magnitudes on correlation results. Lastly, we show the reduction in detection threshold and analyst workload offered by waveform correlation techniques compared to STA/LTA-based detection. We analyzed 4 days of data from each aftershock sequence using all three methods. Most known events clustered in a similar manner across the toolsets. Up to 25% of catalogued events were found to be a member of a cluster. In addition, the Waveform Correlation Detector and Self Scanner identified significant numbers of new events that were not in either the EDR or regional catalogs, showing a lowering of the detection threshold. We extended our analysis to study the effect of distance on correlation results by applying the analysis tools to multiple stations along a transect of nearly constant azimuth when possible. We expected the number of events found via correlation would drop off as roughly 1/r{sup 2}, where r is the distance from mainshock to station. However, we found that regional geological conditions influenced the performance of a given station more than distance. For example, for one sequence we clustered 25% of events at the nearest station to the mainshock (34 km), while our performance dropped to 2% at a station 550 km distant. However, we matched our best performance (25% clustering) at a station 198 km distant.
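The core operation shared by these tools is a normalized cross-correlation of a template event against continuous data; a minimal sketch (the threshold and windowing choices below are illustrative, not those used in the study) is:

```cpp
// Sketch of waveform-correlation detection: slide a template event over a
// continuous data stream and flag window start samples whose normalized
// cross-correlation exceeds a threshold. Threshold/windowing are illustrative.
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<std::size_t> correlationDetections(const std::vector<double>& stream,
                                               const std::vector<double>& tmpl,
                                               double threshold = 0.7) {
  std::vector<std::size_t> picks;
  const std::size_t n = tmpl.size();
  if (n == 0 || stream.size() < n) return picks;

  double tNorm = 0.0;
  for (double v : tmpl) tNorm += v * v;
  tNorm = std::sqrt(tNorm);

  for (std::size_t i = 0; i + n <= stream.size(); ++i) {
    double dot = 0.0, sNorm = 0.0;
    for (std::size_t j = 0; j < n; ++j) {
      dot += stream[i + j] * tmpl[j];
      sNorm += stream[i + j] * stream[i + j];
    }
    const double cc = dot / (std::sqrt(sNorm) * tNorm + 1e-12);
    if (cc >= threshold) picks.push_back(i);  // candidate detection at sample i
  }
  return picks;
}
```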
As high energy laser systems evolve towards higher energies, fundamental material properties such as the laser-induced damage threshold (LIDT) of the optics limit the overall system performance. The Z-Backlighter Laser Facility at Sandia National Laboratories uses a pair of such kilojoule-class Nd:Phosphate Glass lasers for x-ray radiography of high energy density physics events on the Z-Accelerator. These two systems, the Z-Beamlet system operating at 527nm/1ns and the Z-Petawatt system operating at 1054nm/0.5ps, can be combined for some experimental applications. In these scenarios, dichroic beam combining optics and subsequent dual wavelength high reflectors will see a high fluence from combined simultaneous laser exposure and may even see lingering effects when used for pump-probe configurations. Only recently have researchers begun to explore such concerns, looking at individual and simultaneous exposures of optics to 1064nm and third harmonic 355nm light from Nd:YAG [1]. However, to our knowledge, measurements of simultaneous and delayed dual wavelength damage thresholds on such optics have not been performed for exposure to 1054nm and its second harmonic light, especially when the pulses are of disparate pulse duration. The Z-Backlighter Facility has an instrumented damage tester setup to examine the issues of laser-induced damage thresholds in a variety of such situations [2]. Using this damage tester, we have measured the LIDT of dual wavelength high reflectors at 1054nm/0.5ps and 532nm/7ns, separately and spatially combined, both co-temporal and delayed, with single and multiple exposures. We found that the LIDT of the sample at 1054nm/0.5ps can be significantly lowered, from 1.32 J/cm{sup 2} damage fluence with 1054nm/0.5ps only to 1.05 J/cm{sup 2} with the simultaneous presence of 532nm/7ns laser light at a fluence of 8.1 J/cm{sup 2}. This reduction of the LIDT of the sample at 1054nm/0.5ps continues as the fluence of the 532nm/7ns laser light simultaneously present increases. The reduction of LIDT does not occur when the two pulses are temporally separated. This paper will also present dual wavelength LIDT results of commercial dichroic beam-combining optics simultaneously exposed with laser light at 1054nm/2.5ns and 532nm/7ns.
Adagio is a three-dimensional, implicit solid mechanics code with a versatile element library, nonlinear material models, and capabilities for modeling large deformation and contact. Adagio is a parallel code, and its nonlinear solver and contact capabilities enable scalable solutions of large problems. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. The Adagio 4.16 User's Guide provides information about the functionality in Adagio and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Adagio is similar to that of the code Presto [3]. Presto, like Adagio, is a solid mechanics code built on the SIERRA Framework. The primary difference between the two codes is that Presto uses explicit time integration for transient dynamics analysis, whereas Adagio is an implicit code. Because of the similarities in input and usage between Adagio and Presto, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Presto may be found in the Adagio user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code. For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact in the two guides differs. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4; JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D. Pronto3D is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13]. ACME is a third-party library for contact. One of the key concepts for the command structure in the input file is a concept referred to as scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.
Presto is a three-dimensional transient dynamics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. Contact capabilities are parallel and scalable. The Presto 4.16 User's Guide provides information about the functionality in Presto and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Presto is similar to that of the code Adagio [3]. Adagio is a three-dimensional quasi-static code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. Adagio, like Presto, is built on the SIERRA Framework [1]. Contact capabilities for Adagio are also parallel and scalable. A significant feature of Adagio is that it offers a multilevel, nonlinear iterative solver. Because of the similarities in input and usage between Presto and Adagio, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Adagio may be found in the Presto user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code. For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact in the two guides differs. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4; JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D. Pronto3D is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13]. ACME is a third-party library for contact. One of the key concepts for the command structure in the input file is a concept referred to as scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.
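In generic notation (standard reliability-based design optimization symbols, not taken from the paper), the problem class being targeted is:

```latex
% Generic RBDO statement (illustrative notation): d = design variables,
% X = random variables, g = performance function (g <= 0 denotes failure),
% p_f^{max} = allowable probability of failure.
\min_{d} \; f(d)
\quad \text{s.t.} \quad
P\!\left[\, g(d, X) \le 0 \,\right] \le p_f^{\max},
\qquad d_L \le d \le d_U .
```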
Monolithic photonic integrated circuits (PICs) have a long history reaching back more than 40 years. During that time, and particularly in the past 15 years, the technology has matured and the application space grown to span sophisticated tunable diode lasers, 40 Gb/s electrical-to-optical signal converters with complex data formats, wavelength multiplexors and routers, as well as chemical/biological sensors. Most of this activity has centered in recent years on optical circuits built on either Silicon or InP substrates. This talk will review the three classes of PIC and highlight the unique strengths, and weaknesses, of PICs based on Silicon and InP substrates. Examples will be provided from recent R&D activity.
It is known that, in general, the correlation structure in the joint distribution of model parameters is critical to the uncertainty analysis of that model. Very often, however, studies in the literature only report nominal values for parameters inferred from data, along with confidence intervals for these parameters, but no details on the correlation or full joint distribution of these parameters. When neither posterior nor data are available, but only summary statistics such as nominal values and confidence intervals, a joint PDF must be chosen. Given the summary statistics, it may not be reasonable or necessary to assume the parameters are independent random variables. We demonstrate, using a Bayesian inference procedure, how to construct a posterior density for the parameters exhibiting self-consistent correlations, in the absence of data, given (1) the fit-model, (2) nominal parameter values, (3) bounds on the parameters, and (4) a postulated statistical model, around the fit-model, for the missing data. Our approach ensures external Bayesian updating while marginalizing over possible data realizations. We then address the matching of given parameter bounds through the choice of hyperparameters, which are introduced in postulating the statistical model, but are not given nominal values. We discuss some possible approaches, including (1) inferring them in a separate Bayesian inference loop and (2) optimization. We also perform an empirical evaluation of the algorithm, showing that the posterior obtained with this data-free inference compares well with the true posterior obtained from inference against the full data set.
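One way to write the data-free construction schematically (an illustrative paraphrase, not the paper's notation) is as a marginalization of the usual posterior over data realizations drawn from the postulated statistical model:

```latex
% Schematic only: \theta = fit-model parameters, \phi = hyperparameters of the
% postulated statistical model, D = a hypothetical data realization consistent
% with the reported nominal values and bounds.
p(\theta \mid \phi) \;=\; \int p(\theta \mid D)\, p(D \mid \phi)\, \mathrm{d}D ,
\qquad
p(\theta \mid D) \;\propto\; p(D \mid \theta)\, \pi(\theta) .
```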
One of the authors previously conjectured that the wrinkling of propagating fronts by weak random advection increases the bulk propagation rate (turbulent burning velocity) in proportion to the 4/3 power of the advection strength. An exact derivation of this scaling is reported. The analysis shows that the coefficient of this scaling is equal to the energy density of a lower-dimensional Burgers fluid with a white-in-time forcing whose spatial structure is expressed in terms of the spatial autocorrelation of the flow that advects the front. The replica method of field theory has been used to derive an upper bound on the coefficient as a function of the spatial autocorrelation. High precision numerics show that the bound is usefully sharp. Implications for strongly advected fronts (e.g., turbulent flames) are noted.
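In generic symbols, the reported result is of the form:

```latex
% Schematic form of the reported scaling (generic symbols): s_T = bulk (turbulent)
% propagation rate, s_0 = unperturbed front speed, u' = advection strength,
% C = coefficient given by the energy density of the associated Burgers fluid.
s_T - s_0 \;=\; C\, u'^{\,4/3}
```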
Recent work suggests that cloud effects remain one of the largest sources of uncertainty in model-based estimates of climate sensitivity. In particular, the entrainment rate in stratocumulus-topped mixed layers needs better models. More than thirty years ago a clever laboratory experiment was conducted by McEwan and Paltridge to examine an analog of the entrainment process at the top of stratiform clouds. Sayler and Breidenthal extended this pioneering work and determined the effect of the Richardson number on the dimensionless entrainment rate. The experiments gave hints that the interaction between molecular effects and the one-sided turbulence seems to be crucial for understanding entrainment. From the numerical point of view large-eddy simulation (LES) does not allow explicitly resolving all the fine scale processes at the entrainment interface. Direct numerical simulation (DNS) is limited due to the Reynolds number and is not the tool of choice for parameter studies. Therefore it is useful to investigate new modeling strategies, such as stochastic turbulence models which allow sufficient resolution at least in one dimension while having acceptable run times. We will present results of the One-Dimensional Turbulence stochastic simulation model applied to the experimental setup of Sayler and Breidenthal. The results on radiatively induced entrainment follow quite well the scaling of the entrainment rate with the Richardson number that was experimentally found for a set of trials. Moreover, we investigate the influence of molecular effects, the fluids optical properties, and the artifact of parasitic turbulence experimentally observed in the laminar layer. In the simulations the parameters are varied systematically for even larger ranges than in the experiment. Based on the obtained results a more complex parameterization of the entrainment rate than currently discussed in the literature seems to be necessary.
We report air filamentation by a 1550 nm subpicosecond pulse. During filamentation, the continuum generated was less than expected. A large amount of third harmonic was also generated.
In this paper, we report the progress made in our project recently funded by the US Department of Energy (DOE) toward developing a computational capability, which includes a two-phase, three-dimensional PEM (polymer electrolyte membrane) fuel cell model and its coupling with DAKOTA (a design and optimization toolkit developed and being enhanced by Sandia National Laboratories). We first present a brief literature survey in which the prominent/notable PEM fuel cell models developed by various researchers or groups are reviewed. Next, we describe the two-phase, three-dimensional PEM fuel cell model being developed, tested, and later validated by experimental data. Results from case studies are presented to illustrate the utility of our comprehensive, integrated cell model. The coupling between the PEM fuel cell model and DAKOTA is briefly discussed. Our efforts in this DOE-funded project are focused on developing a validated computational capability that can be employed for PEM fuel cell design and optimization.
This paper focuses on the extraction of skeletons of CAD models and its applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton lies equidistant from at least two points on the boundary, (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet, and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion on the related work from other researchers in the area of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. The skeletons have proved to be a great shape abstraction tool in analyzing the geometric complexity of CAD models as they are symmetric, simpler (reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.
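The bisection property underlying the first approach is the classical definition of the medial axis; in standard notation (generic, not specific to this paper):

```latex
% Standard medial axis definition (illustrative): \Omega = object interior,
% \partial\Omega = boundary, d(x,\partial\Omega) = distance from x to the boundary.
\mathrm{MAT}(\Omega) \;=\; \bigl\{\, x \in \Omega \;:\;
\#\{\, y \in \partial\Omega : \|x - y\| = d(x,\partial\Omega) \,\} \ge 2 \,\bigr\},
\quad \text{with each } x \text{ carrying its radius } d(x,\partial\Omega).
```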
In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.
Results of several experiments aimed at remedying photoresist adhesion failure during spray wet chemical etching of InGaP/GaAs NPN HBTs are reported. Several factors were identified that could influence adhesion and a Design of Experiment (DOE) approach was used to study the effects and interactions of selected factors. The most significant adhesion improvement identified is the incorporation of a native oxide etch immediately prior to the photoresist coat. In addition to improving adhesion, this pre-coat treatment also alters the wet etch profile of (100) GaAs so that the reaction limited etch is more isotropic compared to wafers without surface treatment; the profiles have a positive taper in both the [011] and [011] directions, but the taper angles are not identical. The altered profiles have allowed us to predictably yield fully probe-able HBTs with 5 x 5 {micro}m emitters using 5200 {angstrom} evaporated metal without planarization.
Scoping studies have demonstrated that ceragenins, when linked to water-treatment membranes, have the potential to create biofouling-resistant water-treatment membranes. Ceragenins are synthetically produced molecules that mimic antimicrobial peptides. Evidence includes measurements of CSA-13 prohibiting the growth of and killing planktonic Pseudomonas fluorescens. In addition, imaging of biofilms that were in contact with a ceragenin showed more dead cells relative to live cells than in a biofilm that had not been treated with a ceragenin. This work has demonstrated that ceragenins can be attached to polyamide reverse osmosis (RO) membranes, though further work is needed to improve the uniformity of the attachment. Finally, methods have been developed to use hyperspectral imaging with multivariate curve resolution to view ceragenins attached to the RO membrane. Future work will be conducted to better attach the ceragenin to the RO membranes and more completely test the biocidal effectiveness of the ceragenins on the membranes.
Objectives of the Office of Energy Efficiency and Renewable Energy (EERE) 2009-2010 Studies (Solar, Wind, Geothermal, & Combustion Engine R&D) are to: (1) Demonstrate to investors that EERE research and technology development (R&D) programs & subprograms are 'Worth It'; (2) Develop an improved Benefit-Cost methodology for determining realized economic and other benefits of EERE R&D programs - (a) Model government additionality more thoroughly and on a case-by-case basis; (b) Move beyond economic benefits; and (c) Have each study calculate returns to a whole EERE program/subprogram; and (3) Develop a consistent, workable Methods Guide for independent contractors who will perform the evaluation studies.
This paper describes the development and implementation of an integrated resistor process based on reactively sputtered tantalum nitride. Image reversal lithography was shown to be a superior method for liftoff patterning of these films. The results of a response surface design of experiments (DOE) for the sputter deposition of the films are discussed. Several approaches to stabilization baking were examined and the advantages of the hot plate method are shown. In support of a new capability to produce special-purpose HBT-based Small-Scale Integrated Circuits (SSICs), we developed our existing TaN resistor process, designed for research prototyping, into one with greater maturity and robustness. Included in this work was the migration of our TaN deposition process from a research-oriented tool to a tool more suitable for production. Also included was implementation and optimization of a liftoff process for the sputtered TaN to avoid the complicating effects of subtractive etching over potentially sensitive surfaces. Finally, the method and conditions for stabilization baking of the resistors were experimentally determined to complete the full implementation of the resistor module. Much of the work to be described involves the migration between sputter deposition tools - from a Kurt J. Lesker CMS-18 to a Denton Discovery 550. Though they use nominally the same deposition technique (reactive sputtering of Ta with N{sup +} in an RF-excited Ar plasma), they differ substantially in their design and produce clearly different results in terms of resistivity, conformity of the film, and the difference between as-deposited and stabilized films. We will describe the design of, and results from, the DOE-based method of process optimization on the new tool and compare this to what had been used on the old tool.
Most far-field optical imaging systems rely on a lens and spatially-resolved detection to probe distinct locations on the object. We describe and demonstrate a novel high-speed wide-field approach to imaging that instead measures the complex spatial Fourier transform of the object by detecting its spatially-integrated response to dynamic acousto-optically synthesized structured illumination. Tomographic filtered backprojection is applied to reconstruct the object in two or three dimensions. This technique decouples depth-of-field and working-distance from resolution, in contrast to conventional imaging, and can be used to image biological and synthetic structures in fluoresced or scattered light employing coherent or broadband illumination. We discuss the electronically programmable transfer function of the optical system and its implications for imaging dynamic processes. Finally, we present for the first time two-dimensional high-resolution image reconstructions demonstrating a three-orders-of-magnitude improvement in depth-of-field over conventional lens-based microscopy.
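The measurement principle can be summarized schematically in generic notation (not the authors' symbols): each synthesized fringe pattern yields one complex spatial Fourier component of the object from a single, spatially integrated detector signal, and filtered backprojection then inverts the sampled transform:

```latex
% Illustrative relation with generic symbols: O(\mathbf{r}) = object,
% I_{\mathbf{k},\varphi}(\mathbf{r}) \propto 1 + \cos(\mathbf{k}\cdot\mathbf{r} + \varphi)
% = acousto-optically synthesized fringe illumination, S = integrated signal.
S(\mathbf{k},\varphi) \;=\; \int O(\mathbf{r})\, I_{\mathbf{k},\varphi}(\mathbf{r})\, \mathrm{d}\mathbf{r}
\;\;\Longrightarrow\;\;
\tilde{O}(\mathbf{k}) \;=\; \int O(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, \mathrm{d}\mathbf{r}
\ \text{ is recovered from the } \varphi\text{-dependence of } S(\mathbf{k},\varphi).
```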
We consider the problem of placing sensors in a municipal water network when we can choose both the location of sensors and the sensitivity and specificity of the contamination warning system. Sensor stations in a municipal water distribution network continuously send sensor output information to a centralized computing facility, and event detection systems at the control center determine when to signal an anomaly worthy of response. Although most sensor placement research has assumed perfect anomaly detection, signal analysis software has parameters that control the tradeoff between false alarms and false negatives. We describe a nonlinear sensor placement formulation, which we heuristically optimize with a linear approximation that can be solved as a mixed-integer linear program. We report the results of initial experiments on a real network and discuss tradeoffs between early detection of contamination incidents, and control of false alarms.
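A schematic of the placement side of such a formulation, written as a standard expected-impact placement model (illustrative only; the paper's formulation additionally couples in detector sensitivity and specificity):

```latex
% Schematic expected-impact sensor placement (standard form, illustrative):
% a = contamination incident, \alpha_a = incident weight, L = candidate locations,
% d_{aj} = impact if incident a is first detected at location j,
% s_j = 1 if a sensor is placed at j, x_{aj} = 1 if incident a is first detected
% at j, p = sensor budget.
\min \; \sum_{a} \alpha_a \sum_{j \in L} d_{aj}\, x_{aj}
\quad \text{s.t.} \quad
\sum_{j \in L} x_{aj} = 1 \;\; \forall a, \qquad
x_{aj} \le s_j \;\; \forall a, j, \qquad
\sum_{j \in L} s_j \le p, \qquad
s_j,\, x_{aj} \in \{0,1\}.
```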
Since the energy storage technology market is in a relatively emergent phase, narrowing the gap between pilot project status and commercialization is fundamental to accelerating this innovative market space. This session will explore regional market design factors to facilitate the storage enterprise. You will also hear about: quantifying transmission and generation efficiency enhancements; resource planning for storage; and assessing market mechanisms to accelerate storage adoption regionally.
We will discuss general mathematical ideas arising in the problems of Laser beam shaping and splitting. We will be particularly concerned with questions concerning the scaling and symmetry of such systems.
A two-dimensional, multi-physics computational model based on the finite-element method is developed for simulating the process of solar thermochemical splitting of carbon dioxide (CO{sub 2}) using ferrites (Fe{sub 3}O{sub 4}/FeO) and a counter-rotating-ring receiver/recuperator or CR5, in which carbon monoxide (CO) is produced from gaseous CO{sub 2}. The model takes into account heat transfer, gas-phase flow and multiple-species diffusion in open channels and through pores of the porous reactant layer, and redox chemical reactions at the gas/solid interfaces. Results (temperature distribution, velocity field, and species concentration contours) computed using the model in a case study are presented to illustrate model utility. The model is then employed to examine the effects of injection rates of CO{sub 2} and argon neutral gas, respectively, on CO production rate and the extent of the product-species crossover.
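The Fe{sub 3}O{sub 4}/FeO redox pair referred to here corresponds, in idealized end-member stoichiometry, to reactions of the form:

```latex
% Idealized two-step ferrite cycle for CO2 splitting (end-member stoichiometry):
\mathrm{Fe_3O_4} \;\longrightarrow\; 3\,\mathrm{FeO} + \tfrac{1}{2}\,\mathrm{O_2}
\quad \text{(solar thermal reduction)},
\qquad
3\,\mathrm{FeO} + \mathrm{CO_2} \;\longrightarrow\; \mathrm{Fe_3O_4} + \mathrm{CO}
\quad \text{(re-oxidation, yielding CO)}.
```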