Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques can provide large speedups in frequency-domain (i.e., AC response) analysis of linear systems, such speedups are often not realized when performing transient analysis on the systems, particularly when the reduced system is coupled with other circuit components. Reduced system size, the ostensible goal of MOR methods, is often insufficient by itself to improve transient simulation speed on realistic circuit problems; making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.
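For context, the projection step common to Krylov-subspace MOR methods can be sketched as follows. This is a minimal illustration, not Xyce's implementation; the random system matrices and the reduced order q are assumptions made for the example.

```python
import numpy as np

# Hedged sketch of Krylov-subspace model order reduction (not the report's
# method): compress a linear state-space system (A, B, C) onto an
# orthonormal basis V of the block-Krylov subspace. System and sizes are
# illustrative.
rng = np.random.default_rng(0)
n, q = 50, 6                                     # full and reduced orders
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # full-order system matrix
B = rng.normal(size=(n, 1))                      # input vector
C = rng.normal(size=(1, n))                      # output vector
# Krylov subspace span{B, AB, ..., A^(q-1) B}, orthonormalized via QR
K = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(q)])
V, _ = np.linalg.qr(K)
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V         # reduced q x q ROM
# One-sided projection preserves the leading Markov parameters C A^j B
```

A transient simulation then integrates the q-state ROM in place of the n-state system; as the abstract notes, that size reduction alone does not guarantee transient speedup once the ROM block is coupled to other circuit components.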
The purpose of this report is to develop a project management plan for maintaining and monitoring liquid radioactive waste tanks at Iraq's Al-Tuwaitha Nuclear Research Center. Based on information from several sources, the Al-Tuwaitha site has approximately 30 waste tanks that contain varying amounts of liquid or sludge radioactive waste. All of the tanks have been non-operational for over 20 years and most have limited characterization. The program plan embodied in this document provides guidance on conducting radiological surveys, posting radiation control areas and controlling access, performing tank hazard assessments to remove debris and gain access, and conducting routine tank inspections. This program plan provides general advice on how to sample and characterize tank contents, and how to prioritize tanks for soil sampling and borehole monitoring.
Pressure-shear experiments were performed on granular tungsten carbide and sand using a newly-refurbished slotted barrel gun. The sample is a thin layer of the granular material sandwiched between driver and anvil plates that remain elastic. Because of the obliquity, impact generates both a longitudinal wave, which compresses the sample, and a shear wave that probes the strength of the sample. Laser velocity interferometry is employed to measure the velocity history of the free surface of the anvil. Since the driver and anvil remain elastic, analysis of the results is, in principle, straightforward. Experiments were performed at pressures up to nearly 2 GPa using titanium plates and at higher pressure using zirconium plates. Those done with the titanium plates produced values of shear stress of 0.1-0.2 GPa, with the value increasing with pressure. On the other hand, those experiments conducted with zirconia anvils display results that may be related to slipping at an interface and shear stresses mostly at 0.1 GPa or less. Recovered samples display much greater particle fracture than is observed in planar loading, suggesting that shearing is a very effective mechanism for comminution of the grains.
A reference design and operational procedures for the disposal of high-level radioactive waste in deep boreholes have been developed and documented. The design and operations are feasible with currently available technology and meet existing safety and anticipated regulatory requirements. Objectives of the reference design include providing a baseline for more detailed technical analyses of system performance and serving as a basis for comparing design alternatives. Numerous factors suggest that deep borehole disposal of high-level radioactive waste is inherently safe. Several lines of evidence indicate that groundwater at depths of several kilometers in continental crystalline basement rocks has long residence times and low velocity. High salinity fluids have limited potential for vertical flow because of density stratification and prevent colloidal transport of radionuclides. Geochemically reducing conditions in the deep subsurface limit the solubility and enhance the retardation of key radionuclides. A non-technical advantage that the deep borehole concept may offer over a repository concept is that of facilitating incremental construction and loading at multiple, perhaps regional, locations. The disposal borehole would be drilled to a depth of 5,000 m using a telescoping design and would be logged and tested prior to waste emplacement. Waste canisters would be constructed of carbon steel, sealed by welds, and connected into canister strings with high-strength connections. Waste canister strings of about 200 m length would be emplaced in the lower 2,000 m of the fully cased borehole and be separated by bridge and cement plugs. Sealing of the upper part of the borehole would be done with a series of compacted bentonite seals, cement plugs, cement seals, cement plus crushed rock backfill, and bridge plugs. Elements of the reference design meet technical requirements defined in the study. Testing and operational safety assurance requirements are also defined.
Overall, the results of the reference design development and the cost analysis support the technical feasibility of the deep borehole disposal concept for high-level radioactive waste.
Planar shock experiments were conducted on granular tungsten carbide (WC) and tantalum oxide (Ta₂O₅) using the Z machine and a 2-stage gas gun. Additional shock experiments were also conducted on a nearly fully dense form of Ta₂O₅. The experiments on WC yield some of the highest pressure results for granular materials obtained to date. Because of the high distention of Ta₂O₅, the pressures obtained were significantly lower, but the very high temperatures generated led to large contributions of thermal energy to the material response. These experiments demonstrate that the Z machine can be used to obtain accurate shock data on granular materials. The data on Ta₂O₅ were utilized in making improvements to the P-λ model for high pressures; the model is found to capture the results not only of the Z and gas gun experiments but also those from laser experiments on low density aerogels. The results are also used to illustrate an approach for generating an equation of state (EOS) using only the limited data coming from nanoindentation. Although the EOS generated in this manner is rather simplistic, for this material it gives reasonably good results.
We investigate Bayesian techniques that can be used to reconstruct field variables from partial observations. In particular, we target fields that exhibit spatial structures with a large spectrum of lengthscales. Contemporary methods typically describe the field on a grid and estimate structures which can be resolved by it. In contrast, we address the reconstruction of grid-resolved structures as well as estimation of statistical summaries of subgrid structures, which are smaller than the grid resolution. We perform this in two different ways: (a) via a physical (phenomenological), parameterized subgrid model that summarizes the impact of the unresolved scales at the coarse level and (b) via multiscale finite elements, where specially designed prolongation and restriction operators establish the interscale link between the same problem defined on a coarse and fine mesh. The estimation problem is posed as a Bayesian inverse problem. Dimensionality reduction is performed by projecting the field to be inferred on a suitable orthogonal basis set, viz. the Karhunen-Loève expansion of a multi-Gaussian. We first demonstrate our techniques on the reconstruction of a binary medium consisting of a matrix with embedded inclusions, which are too small to be grid-resolved. The reconstruction is performed using an adaptive Markov chain Monte Carlo method. We find that the posterior distributions of the inferred parameters are approximately Gaussian. We exploit this finding to reconstruct a permeability field with long, but narrow embedded fractures (which are too fine to be grid-resolved) using scalable ensemble Kalman filters; this also allows us to address larger grids. Ensemble Kalman filtering is then used to estimate the values of hydraulic conductivity and specific yield in a model of the High Plains Aquifer in Kansas.
Strong conditioning of the spatial structure of the parameters and the non-linear aspects of the water table aquifer create difficulty for the ensemble Kalman filter. We conclude with a demonstration of the use of multiscale stochastic finite elements to reconstruct permeability fields. This method, though computationally intensive, is general and can be used for multiscale inference in cases where a subgrid model cannot be constructed.
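The dimensionality-reduction step described above can be sketched minimally as follows: a truncated Karhunen-Loève expansion of a 1-D multi-Gaussian field. The squared-exponential covariance, grid size, correlation length, and truncation order are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hedged sketch: truncated Karhunen-Loeve (KL) expansion of a 1-D
# multi-Gaussian field. Covariance model and parameters are illustrative.
n, ell = 200, 0.1
x = np.linspace(0.0, 1.0, n)
# Squared-exponential covariance matrix of the prior Gaussian field
Cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
# Eigendecomposition; eigh returns eigenvalues in ascending order
vals, vecs = np.linalg.eigh(Cov)
vals, vecs = vals[::-1], vecs[:, ::-1]      # reorder to descending
k = 20                                      # truncation order
# A field realization is parameterized by k standard-normal coefficients,
# which become the (low-dimensional) unknowns of the inverse problem
xi = np.random.default_rng(0).standard_normal(k)
field = vecs[:, :k] @ (np.sqrt(np.clip(vals[:k], 0.0, None)) * xi)
# Fraction of the field's variance captured by the leading k modes
captured = vals[:k].sum() / vals.sum()
```

Inference (MCMC or ensemble Kalman filtering, as above) then operates on the k coefficients rather than on the n grid values.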
Molecular dynamics (MD) simulation is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.
Previously developed statistical parametric mapping techniques, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely-sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial-temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the earth, and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and in Southern India.
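The regression-based approach can be illustrated with a minimal mass-univariate (SPM-style) sketch: fit a per-pixel linear model over time, then flag observations with extreme standardized residuals. The synthetic data cube, the implanted anomaly, and the threshold are assumptions for the example, not the report's actual model.

```python
import numpy as np

# Hedged sketch of per-pixel regression-based anomaly detection on a
# spatial-temporal cube. Data, anomaly, and threshold are synthetic.
rng = np.random.default_rng(1)
T, H, W = 52, 8, 8                        # one year of weekly 8x8 images
field = rng.normal(size=(T, H, W))        # stand-in for an NDVI time series
field[30, 2, 2] += 10.0                   # implanted anomaly at week 30
t = np.arange(T, dtype=float)
X = np.column_stack([np.ones(T), t])      # design matrix: intercept + trend
Y = field.reshape(T, -1)                  # one regression per pixel
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta
z = resid / resid.std(axis=0, ddof=2)     # standardized residuals per pixel
anomalies = np.argwhere(np.abs(z.reshape(T, H, W)) > 4.0)
```

Each row of `anomalies` is a (week, row, column) index whose residual exceeds the threshold; the implanted spike at (30, 2, 2) is recovered.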
The overarching goal of this Truman LDRD project was to explore mechanisms of thermal transport at interfaces of nanomaterials, specifically linking the thermal conductivity and thermal boundary conductance to the structures and geometries of interfaces and boundaries. Deposition, fabrication, and post-processing procedures of nanocomposites and devices can give rise to interatomic mixing around interfaces of materials, leading to stresses and imperfections that could affect heat transfer. An understanding of the physics of energy carrier scattering processes and their response to interfacial disorder will elucidate the potential of applying these novel materials to next-generation high-powered nanodevices and energy conversion applications. An additional goal of this project was to use the knowledge gained from linking interfacial structure to thermal transport in order to develop avenues to control, or 'tune', the thermal transport in nanosystems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher J.; Phillips, Tyrone S.
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
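The Richardson-extrapolation benchmark mentioned above can be sketched concretely. In this minimal example the "numerical solution" is a second-order central-difference derivative evaluated on two systematically refined step sizes; the function and step sizes are illustrative choices, not taken from the report.

```python
import numpy as np

# Hedged sketch of a Richardson-extrapolation discretization error estimate,
# the method MNP is compared against. It needs solutions on two grids with
# a known refinement ratio (here 2) and formal order of accuracy p.
def central_diff(f, x, h):
    # Second-order accurate approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

f, x, p = np.sin, 1.0, 2              # p = formal order of accuracy
f_h = central_diff(f, x, 0.01)        # fine-grid value
f_2h = central_diff(f, x, 0.02)       # coarse-grid value (refinement ratio 2)
# Richardson estimate of the fine-grid discretization error (exact - f_h)
err_est = (f_h - f_2h) / (2**p - 1)
err_true = np.cos(x) - f_h            # exact derivative of sin is cos
```

The estimate agrees with the true discretization error to higher order in h; MNP/defect correction targets the same quantity from a single additional solution on the original grid.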
Network measurement is a discipline that provides the techniques to collect data that are fundamental to many branches of computer science. While many packet capturing tools and comparisons among them are available in the literature and elsewhere, the impact of these tools on existing processes has not been thoroughly studied. While this is not a concern for collection methods in which dedicated servers are used, many usage scenarios of packet capturing now require the capturing tool to run concurrently with operational processes. In this work we perform experimental evaluations of the performance impact that packet capturing processes have on web-based services; in particular, we observe the impact on web servers. We find that packet capturing processes indeed impact the performance of web servers, but on a multi-core system the impact varies depending on whether the packet capturing and web hosting processes are co-located or not. In addition, the architecture and behavior of the web server and the process scheduling are coupled with the behavior of the packet capturing process, which in turn also affects the web server's performance.
Nuclear fuel reprocessing plants contain a wealth of plant monitoring data including material measurements, process monitoring, administrative procedures, and physical protection elements. Future facilities are moving in the direction of highly-integrated plant monitoring systems that make efficient use of the plant data to improve monitoring and reduce costs. The Separations and Safeguards Performance Model (SSPM) is an analysis tool used for modeling advanced monitoring systems and for determining system response under diversion scenarios. This report both describes the architecture for such a future monitoring system and presents results under various diversion scenarios. Improvements made in the past year include the development of statistical tests for detecting material loss, the integration of material balance alarms to improve physical protection, and the integration of administrative procedures. The SSPM has been used to demonstrate how advanced instrumentation (as developed in the Material Protection, Accounting, and Control Technologies campaign) can benefit the overall safeguards system, as well as how all instrumentation is tied into the physical protection system. This concept has the potential to greatly improve the probability of detection for both abrupt and protracted diversion of nuclear material.
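As an illustration of the kind of sequential statistical test used to detect protracted material loss (the SSPM's actual tests are not reproduced here), a minimal one-sided CUSUM (Page) test on a material-balance sequence might look like:

```python
# Hedged sketch of a one-sided CUSUM (Page) test on material balances.
# The allowance k and decision threshold h are illustrative tuning
# parameters, not values from the SSPM.
def cusum_alarms(balances, k=0.5, h=5.0):
    s, alarms = 0.0, []
    for mb in balances:
        s = max(0.0, s + mb - k)   # accumulate deviations above the allowance
        alarms.append(s > h)
    return alarms

# Synthetic balances: in-control noise, then a protracted loss of 1.5/period
balances = [0.1] * 20 + [1.5] * 10
alarms = cusum_alarms(balances)
```

A single balance of 1.5 would not alarm on its own, but the cumulative statistic crosses the threshold after a few periods of sustained loss, which is why such tests help against protracted diversion.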
In this project, we developed a confined cooperative self-assembly process to synthesize one-dimensional (1D) J-aggregates including nanowires and nanorods with controlled diameters and aspect ratios. The facile and versatile aqueous solution process assimilates photo-active macrocyclic building blocks inside surfactant micelles, forming stable single-crystalline high surface area nanoporous frameworks with well-defined external morphology defined by the building block packing. Characterizations using TEM, SEM, XRD, N₂ and NO sorption isotherms, TGA, UV-vis spectroscopy, and fluorescence imaging and spectroscopy indicate that the J-aggregate nanostructures are monodisperse and may further assemble into hierarchical arrays with multi-modal functional pores. The nanostructures exhibit enhanced and collective optical properties over the individual chromophores. This project was a small-footprint research effort which, nonetheless, produced significant progress towards both the stated goal as well as unanticipated research directions.
Thermal detection has made extensive progress in the last 40 years; however, the speed and detectivity can still be improved. The advancement of silicon photonic microring resonators has made them intriguing for detection devices due to their small size and high quality factors. Implementing silicon photonic microring or microdisk resonators as thermal detectors gives rise to higher speed and detectivity, as well as lower noise, compared to conventional devices with electrical readouts. This LDRD effort explored the design and measurements of silicon photonic microdisk resonators used for thermal detection. The characteristic values, consisting of the thermal time constant (τ ≈ 2 ms) and noise equivalent power, were measured and found to surpass the performance of the best microbolometers. Furthermore, the detectivity was found to be D_λ = 2.47 × 10⁸ cm·√Hz/W at 10.6 µm, which is comparable to commercial detectors. Subsequent design modifications should increase the detectivity by another order of magnitude. Thermal detection in the terahertz (THz) remains underdeveloped, opening a door for new innovative technologies such as metamaterial-enhanced detectors. This project also explored the use of metamaterials in conjunction with a cantilever design for detection in the THz region and demonstrated the use of metamaterials as custom thin-film absorbers for thermal detection. While much work remains to integrate these technologies into a unified platform, the early stages of research show promise for thermal detection.
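For reference, the specific detectivity quoted above is conventionally normalized by detector area and measurement bandwidth (a standard definition, not a formula reproduced from this report):

```latex
D^{*} = \frac{\sqrt{A_d \, \Delta f}}{\mathrm{NEP}}
```

where A_d is the detector area (cm²), Δf the measurement bandwidth (Hz), and NEP the noise-equivalent power (W), which yields the cm·√Hz/W units quoted above.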
This report describes the laboratory directed research and development work to model relevant areas of the brain that associate multi-modal information for long-term storage for the purpose of creating a more effective, and more automated, association mechanism to support rapid decision making. Using the biology and functionality of the hippocampus as an analogy or inspiration, we have developed an artificial neural network architecture to associate k-tuples (paired associates) of multimodal input records. The architecture is composed of coupled unimodal self-organizing neural modules that learn generalizations of unimodal components of the input record. Cross modal associations, stored as a higher-order tensor, are learned incrementally as these generalizations form. Graph algorithms are then applied to the tensor to extract multi-modal association networks formed during learning. Doing so yields a novel approach to data mining for knowledge discovery. This report describes the neurobiological inspiration, architecture, and operational characteristics of our model, and also provides a real world terrorist network example to illustrate the model's functionality.