A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
The authors present designs of quasi-spherical direct-drive z-pinch loads (QSDD1) for machines such as ZR at 28 MA load current with a 150 ns implosion time. A double-shell system for ZR has produced a 2D simulated yield of 12 MJ, but the drive for this system on ZR has essentially no margin. A double-shell system for a 56 MA driver at a 150 ns implosion time has produced a simulated yield of 130 MJ with considerable margin in attaining the temperature and density-radius product necessary for ignition. They also present designs for a magnetically insulated current amplifier (MICA) that increases the attainable ZR load current to 36 MA with a 28 ns rise time. The faster pulse provided by a MICA makes it possible to drive quasi-spherical single-shell implosions (QSDD2). They present results from 1D LASNEX and 2D MACH2 simulations of promising low-adiabat cryogenic QSDD2 capsules and 1D LASNEX results for high-adiabat cryogenic QSDD2 capsules.
The development of tools for complex dynamic security systems is not a straightforward engineering task but, rather, a scientific task in which discovery of new scientific principles and mathematics is necessary. For years, scientists have observed complex behavior but have had difficulty understanding it. Prominent examples include insect colony organization, the stock market, molecular interactions, fractals, and emergent behavior. Engineering such systems will be an even greater challenge. This report explores four tools for engineered complex dynamic security systems: Partially Observable Markov Decision Processes, Percolation Theory, Graph Theory, and Exergy/Entropy Theory. Additionally, enabling hardware technologies for next-generation security systems are described: a 100-node wireless sensor network, an unmanned ground vehicle, and an unmanned aerial vehicle.
This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivering special payloads to locations that require human-like locomotion to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle is emulating in a robot the stability and balance control that bipedal locomotion requires and that comes naturally to humans. A tracked robot is bulky and limited, but its wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, yet it requires active balance and stability control for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.
Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
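The report's modified Extended Kalman Filter is not reproduced here, but the standard EKF predict/update cycle it builds on can be sketched as follows. This is a generic textbook sketch, not the authors' filter; all function and variable names are illustrative:

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    # Propagate the state estimate through the (possibly nonlinear)
    # motion model f, with Jacobian F evaluated at x and process noise Q.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    # Correct the prediction with measurement z, observation model h,
    # Jacobian H, and measurement noise covariance R.
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a VSLAM setting, the state typically stacks the camera pose (driven by IMU measurements in the prediction step) with landmark positions, and the update step uses reprojected image features as measurements.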
Lecture Notes in Computational Science and Engineering
Phipps, Eric; Casey, Richard; Guckenheimer, John
Periodic processes are ubiquitous in biological systems, yet modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging. Moreover, mathematical models of biological processes frequently contain many poorly-known parameters. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to evaluate derivatives accurately and efficiently for time integration, parameter sensitivities, root finding and optimization. The resulting algorithms allow periodic orbits to be computed to high accuracy using coarse discretizations. Derivative computations are carried out using a new automatic differentiation package called ADMC++ that provides derivatives and Taylor series coefficients of matrix-valued functions written in the MATLAB programming language. The algorithms are applied to a periodic orbit problem in rigid-body dynamics and a parameter estimation problem in neural oscillations.
Many large-scale computations involve a mesh and first (or sometimes higher) partial derivatives of functions of mesh elements. In principle, automatic differentiation (AD) can provide the requisite partials more efficiently and accurately than conventional finite-difference approximations. AD requires source-code modifications, which may be little more than changes to declarations. Such simple changes can easily give improved results, e.g., when Jacobian-vector products are used iteratively to solve nonlinear equations. When gradients are required (say, for optimization) and the problem involves many variables, "backward" AD is in theory very efficient, but when carried out automatically and straightforwardly, it may use a prohibitive amount of memory. In this case, applying AD separately to each element function and manually assembling the gradient pieces - semiautomatic differentiation - can deliver gradients efficiently and accurately. This paper concerns ongoing work; it compares several implementations of backward AD, describes a simple operator-overloading implementation specialized for gradient computations, and compares the implementations on some mesh-optimization examples. Ideas from the specialized implementation could be used in fully general source-to-source translators for C and C++.
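The flavor of an operator-overloading "backward" (reverse-mode) AD implementation can be conveyed in a few lines. The toy `Var` class below is invented for this sketch and is not the paper's C/C++ implementation; it records each operation's local partial derivatives and replays them in reverse to accumulate a gradient:

```python
class Var:
    """Minimal reverse-mode AD via operator overloading (illustrative only)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_var, local_partial) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    __radd__ = __add__
    __rmul__ = __mul__

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self) into each ancestor's .grad
        # by the chain rule, walking the recorded graph in reverse.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)
```

For f(x, y) = x*y + x, calling `f.backward()` yields `x.grad == y + 1` and `y.grad == x` in one reverse sweep, regardless of how many inputs there are; the memory cost of storing the graph is exactly the issue the paper's semiautomatic approach addresses.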
Nanometric aluminum (123 nm, spherical) was mixed with two different sieve-cut sizes of HMX (106-150 μm and 212-300 μm), and a series of gas gun tests were conducted to compare reactive wave development in pure HMX to that of aluminized HMX. In the absence of added metal, 4-mm-thick, low-density (68% of theoretical maximum density) pressings of the 106-150 μm HMX respond to modest shock loading by developing distinctive reactive waves that exhibit both temporal and meso-scale spatial fluctuations. Similar pressings of Al/HMX containing 10% aluminum (by mass) show an initial suppression of the usual wave growth seen in HMX samples. The suppression is then followed by an induction period where it is hypothesized that a phase change in the aluminum may occur. Data from VISAR, line-ORVIS, and 2-color pyrometry are given and discussed, and numerical modeling of inert sucrose is used to aid the explanation of the resulting data.
Simulations of a low-speed square cylinder wake and a supersonic axisymmetric base wake are performed using the detached eddy simulation model. A reduced-dissipation form of a shock-capturing flux scheme is employed to mitigate the effects of dissipative error in regions of smooth flow. The reduced-dissipation scheme is demonstrated on a two-dimensional square cylinder wake problem, showing a marked improvement in accuracy for a given grid resolution. The results for simulations on three grids of increasing resolution for the three-dimensional square cylinder wake are compared with experimental data and with other computational studies. The comparisons of mean flow and global flow quantities to experimental data are favorable, whereas the results for second-order statistics in the wake are mixed and do not always improve with increasing spatial resolution. Comparisons to large eddy simulation are also generally favorable, suggesting detached eddy simulation provides an adequate subgrid scale model. Predictions of base drag and centerline wake velocity for the supersonic wake are also good, given sufficient grid refinement. These cases add to the validation library for detached eddy simulation and support its use as an engineering analysis tool for accurate prediction of global flow quantities and mean flow properties.
In order to better understand how the US natural gas network might respond to disruptions, a model was created that represents the network on a regional basis. Natural gas storage for each region is represented as a stock. Transmission between each region is represented as a flow, as are natural gas production, importation, and consumption. Various disruption scenarios were run to test the robustness of the network. The system as modeled proved robust to a variety of disruption scenarios. However, a weakness of the system is that production shortfalls or interruptions cannot be replaced, and demand must therefore be reduced by the amount of the shortfall.
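A stock-and-flow region update of the kind described can be sketched as a simple mass balance. The function below is illustrative only (the report's actual model, units, and flow structure are not reproduced); it shows how a production interruption forces either storage drawdown or a demand shortfall:

```python
def step_region(storage, production, imports, inflow, outflow, demand):
    """Advance one region's gas storage by one time step (stock-and-flow sketch).

    All quantities are in consistent volume units per step. Demand and
    outbound transmission are served from current supply plus storage;
    any unserved amount is reported as a shortfall.
    """
    supply = production + imports + inflow
    available = supply + storage
    served = min(demand + outflow, available)
    shortfall = demand + outflow - served
    storage = available - served          # stock carried to the next step
    return storage, shortfall
```

Running such a step repeatedly with production set to zero shows the storage stock draining and then a persistent shortfall equal to the lost production, which mirrors the report's conclusion that unreplaced supply interruptions must be absorbed as demand reductions.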
An oxidation treatment, often termed "pre-oxidation", is performed on austenitic stainless steel prior to joining to alkali barium silicate glass to produce hermetic seals. The resulting thin oxide acts as a transitional layer and a source of Cr and other elements which diffuse into the glass during the subsequent bonding process. Pre-oxidation is performed in a low pO2 atmosphere to avoid iron oxide formation and the final oxide is composed of Cr2O3, MnCr2O4 spinel, and SiO2. Significant heat-to-heat variations in the oxidation behavior of 304L stainless steel have been observed, which result in inconsistent glass-to-metal (GTM) seal behavior. The objectives of this work were to characterize the stainless steel pre-oxidized layer and the glass/oxide/304L interface region after glass sealing. The 304L oxidation kinetics were determined by thermogravimetric (TG) analysis and the glass/metal seal characteristics were studied using sessile drop tests, in which wetting angles were measured and glass adhesion was analyzed. The pre-oxidized layers and glass/metal interface regions were characterized using metallography, focused ion beam (FIB) sectioning, scanning and transmission electron microscopy, and electron probe microanalysis (EPMA). The results show that poor glass sealing behavior is associated with a more continuous layer of SiO2 at the metal/oxide interface.
Gaussian processes are used as emulators for expensive computer simulations. Recently, Gaussian processes have also been used to model the "error field" or "code discrepancy" between a computer simulation code and experimental data, and the delta term between two levels of computer simulation (multi-fidelity codes). This work presents the use of Gaussian process models to approximate error or delta fields, and examines how one calculates the parameters governing the process. In multi-fidelity modeling, the delta term is used to correct a lower fidelity model to match or approximate a higher fidelity model. The terms governing the Gaussian process (e.g., the parameters of the covariance matrix) are updated using a Bayesian approach. We have found that use of Gaussian process models requires a good understanding of the method itself and an understanding of the problem in enough detail to identify reasonable covariance parameters. The methods are not "black-box" methods that can be used without some statistical understanding. However, Gaussian processes offer the ability to account for uncertainties in prediction. This approach can help reduce the number of high-fidelity function evaluations necessary in multi-fidelity optimization.
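The delta-field idea can be made concrete with a small Gaussian process regression on the high-minus-low fidelity residuals. The sketch below is a generic GP posterior with a squared-exponential covariance, not the specific Bayesian updating scheme of this work; the hyperparameters `length`, `variance`, and `noise` are exactly the covariance parameters the abstract warns must be chosen with understanding:

```python
import numpy as np

def rbf(X1, X2, length=1.0, variance=1.0):
    # Squared-exponential covariance between two 1D point sets.
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_delta_predict(x_train, delta_train, x_test,
                     length=1.0, variance=1.0, noise=1e-6):
    """Posterior mean/variance of a GP fit to high-minus-low fidelity residuals."""
    K = rbf(x_train, x_train, length, variance) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length, variance)
    Kss = rbf(x_test, x_test, length, variance)
    alpha = np.linalg.solve(K, delta_train)
    mean = Ks @ alpha                                  # predicted delta field
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)          # predictive uncertainty
    return mean, np.diag(cov)
```

The corrected surrogate is then `low_fidelity(x) + mean`, with the posterior variance quantifying where additional high-fidelity evaluations would be most informative.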
Forces generated by a static magnetic field interacting with eddy currents can provide a novel method of vibration damping. This paper discusses an experiment performed to validate modeling [3] for a case where a static magnetic field penetrates a thin sheet of conducting, non-magnetic material. When the thin sheet experiences motion, the penetrating magnetic field generates eddy currents within the sheet. These eddy currents then interact with the static field, creating magnetic forces that act on the sheet, providing damping to the sheet motion. In the presented experiment, the sheet was supported by cantilever springs attached to a frame, then excited with a vibratory shaker. The recorded motions of the sheet and the frame were used to characterize the effect of the eddy current damping.
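The essential physics behind this damping mechanism is that sheet motion at velocity v induces eddy current density J ~ sigma*v*B, which reacts against the static field to give a retarding force proportional to velocity, i.e. viscous damping scaling with B squared. A back-of-the-envelope estimate (a standard simplification, not the validated model of the referenced paper; the geometry factor is an assumed order-unity fudge) is:

```python
def eddy_damping_coefficient(sigma, thickness, B, pole_area, geometry_factor=1.0):
    """Rough eddy-current damping coefficient c, where F = -c * v.

    sigma: sheet conductivity (S/m); thickness: sheet thickness (m);
    B: static flux density (T) over pole_area (m^2).  geometry_factor
    is an order-unity correction for return-current paths; the key
    physics is the sigma * B**2 scaling of the viscous coefficient.
    """
    return geometry_factor * sigma * thickness * pole_area * B ** 2
```

For a 1 mm copper sheet under a 0.5 T pole of 1 cm^2, this gives c on the order of 1 N·s/m, and doubling the field quadruples the damping, which is why strong permanent magnets make the effect practically useful.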
Multiple references are often used to excite a structure in modal testing programs. This is necessary to excite all the modes and to extract accurate mode shapes when closely spaced roots are present. An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for several years. This extraction technique calculates reciprocal modal vectors based on frequency response function (FRF) measurements. SMAC was developed to accurately extract modes from structures with moderately damped modes and/or high modal density. In the past, SMAC has worked only with single-reference data. This paper presents an extension of SMAC to work with multiple-reference data. If roots are truly perfectly repeated, the mode shapes extracted by any method will be a linear combination of the "true" shapes. However, most closely spaced roots are not perfectly repeated but have some small difference in frequency and/or damping. SMAC exploits these very small differences. The multi-reference capability of SMAC begins with an evaluation of the MMIF (Multivariate Mode Indicator Function) or CMIF (Complex Mode Indicator Function) from the starting frequency list to determine which roots are likely repeated. Several seed roots are scattered in the region of the suspected multiple roots and convergence is obtained. Mode shapes are then created from each of the references individually. The final set of mode shapes is selected based on one of three different selection techniques. Each of these is presented in this paper. SMAC has long included synthesis of FRFs and MIFs from the roots and residues to check extraction quality against the original data, but the capability to include residual effects has been minimal. Its capabilities for including residual vectors to account for out-of-band modes have now been greatly enhanced.
The ability to resynthesize FRFs and mode indicator functions from the final mode shapes and residual information has also been developed. Examples are provided utilizing the SMAC package on multi-reference experimental data from two different systems.
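Synthesizing an FRF from roots and residues, including residual terms for out-of-band modes, follows the standard modal-superposition form. The sketch below is a textbook receptance synthesis, not the SMAC implementation; the mass-like lower residual and residual-flexibility upper residual are the usual approximations for modes below and above the measured band:

```python
import numpy as np

def synthesize_frf(omega, modes, lower_residual=0.0, upper_residual=0.0):
    """Synthesize a receptance FRF from modal parameters.

    omega: array of angular frequencies (rad/s, nonzero).
    modes: list of (omega_r, zeta_r, A_r) tuples, with residue A_r taken
    real here for simplicity.  lower_residual approximates out-of-band
    modes below the band (mass line); upper_residual approximates those
    above it (residual flexibility).
    """
    H = np.zeros_like(omega, dtype=complex)
    for wr, zr, Ar in modes:
        H += Ar / (wr**2 - omega**2 + 2j * zr * wr * omega)
    H += -lower_residual / omega**2 + upper_residual
    return H
```

Overlaying such a synthesized FRF (and the mode indicator functions computed from it) on the measured data is exactly the kind of extraction-quality check the abstract describes.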
A finite element (FE) model of a shell-payload structure is to be used to predict structural dynamic acceleration response to untestable blast environments. To understand the confidence level of these predictions, the model will be validated using test data from a blast tube experiment. The first step in validating the structural response is to validate the loading. A computational fluid dynamics (CFD) code, Saccara, was used to provide the blast tube pressure loading to the FE model. This paper describes the validation of the CFD pressure loading and its uncertainty quantification with respect to experimental pressure data obtained from geometrical mock-up structures instrumented with pressure gages in multiple nominal blast tube tests. A systematic validation approach developed by the uncertainty quantification group at Sandia National Laboratories was used. Significant effort was applied to distill the pressure loading into a small number of validation metrics important to obtaining a valid final response, which is expressed in terms of the acceleration shock response spectrum. Uncertainty in the pressure loading amplitude is quantified so that it can be applied to the validation blast tube test on the shell-payload structure, which has significant acceleration instrumentation but only a few pressure gages.
International SAMPE Symposium and Exhibition (Proceedings)
Crane, Nathan B.; Wilkes, Jan; Sachs, Emanuel; Allen, Samuel M.
This work reports on the densification of iron nanoparticles by slow drying followed by pressureless sintering. In contrast, most previous work has used high heating rates to both dry and densify the nanoparticle suspension in a single step, and laser heating has been required to achieve high densities by that approach. The slow drying/pressureless sintering approach is shown to be sensitive to reactions between the particles, the stabilizing ligands, the atmosphere, and the substrate. The sintering rate of iron nanoparticles and the final composition of the deposits are significantly impacted by these interactions. However, in both cases studied, the nanoparticles densify under pressureless sintering. When the iron nanoparticle colloid is dried in a porous steel skeleton, it is shown to increase high-temperature strength and reduce sintering shrinkage.
Oxygen-fuel fired glass melting furnaces have successfully reduced NOx and particulate emissions and improved furnace energy efficiency relative to the more conventional air-fuel fired technology. However, full optimisation of the oxygen/fuel approach (particularly with respect to crown refractory corrosion) is unlikely to be achieved until there is improved understanding of the effects of furnace operating conditions on alkali vaporization, batch carryover, and the formation of gaseous air pollutants in operating furnaces. In this investigation, continuous online measurements of alkali concentration (by laser-induced breakdown spectroscopy) were coupled with measurements of the flue gas composition in the exhaust of an oxygen/natural gas fired container glass furnace. The burner stoichiometry was purposefully varied while maintaining normal glass production. The data demonstrate that alkali vaporization and SO2 release increase as the oxygen concentration in the exhaust decreases. NOx emissions showed a direct correlation with the flow rate of infiltrated air into the combustion space. The extent of batch carryover was primarily affected by variations in the furnace differential pressure. The furnace temperature did not vary significantly during the measurement campaign, so no clear correlation could be obtained between the available measurements of furnace temperature and alkali vaporization.
Techniques to ensure shock data quality and to recognize bad data are discussed in this paper. For certain shock environments, acceleration response up to 10 kHz is desired for structural model validation purposes. The validity and uncertainty associated with the experimental data need to be known in order to use the data effectively in model validation. In some cases, the frequency content of impulsive or pyrotechnic loading, or of metal-to-metal contact of joints in the structure, may excite accelerometer resonances at hundreds of kHz. The piezoresistive accelerometers often used to measure such events can provide unreliable data depending on the level and frequency content of the shock. The filtered acceleration time history may not reveal that the data are unreliable. Some data validity considerations include accelerometer mounting systems, sampling rates, band-edge settings, peak acceleration specifications, signal conditioning bandwidth, accelerometer mounted resonance, and signal processing checks. One approach for uncertainty quantification of the sensors, signal conditioning, and data acquisition system is also explained.
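One widely used signal-processing check of the kind mentioned is to integrate the acceleration record to velocity: a resonance-induced zero-shift in a piezoresistive accelerometer, invisible in the filtered time history, shows up as an unphysical velocity drift. The sketch below is an illustrative screening test, not the paper's procedure; the threshold and tail-fraction values are assumptions that would be tuned to the test article:

```python
import numpy as np

def velocity_drift_check(accel, dt, tail_fraction=0.2, tol=0.05):
    """Flag possible zero-shift in shock acceleration data.

    Integrates acceleration to velocity and fits a line to the final
    tail_fraction of the velocity record; a large residual slope there
    (relative to the peak acceleration) suggests a spurious DC offset.
    Returns True when the record looks suspect.
    """
    vel = np.cumsum(accel) * dt                  # crude rectangular integration
    n_tail = max(2, int(len(vel) * tail_fraction))
    t = np.arange(n_tail) * dt
    slope = np.polyfit(t, vel[-n_tail:], 1)[0]   # residual acceleration in tail
    peak_acc = np.max(np.abs(accel))
    return abs(slope) > tol * peak_acc
```

A clean half-sine shock leaves a constant post-pulse velocity and passes; the same record with an added DC offset fails, even though both filtered time histories can look plausible.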