Publications

9 Results

An analysis of the survivability of sensor darts in impacts with trees

Gardner, David R.

A methodology was developed for computing the probability that the sensor dart for the 'Near Real-Time Site Characterization for Assured HDBT Defeat' Grand-Challenge LDRD project will survive deployment over a forested region. The probability can be decomposed into three approximately independent probabilities that account for forest coverage, branch density, and the physics of an impact between the dart and a tree branch. The probability that a dart survives an impact with a tree branch was determined from the deflection induced by the impact. If a dart was deflected so that it impacted the ground at an angle of attack exceeding a user-specified threshold value, the dart was assumed not to survive the impact with the branch; otherwise it was assumed to have survived. A computer code was developed for calculating the dart angle of attack at impact with the ground, and a Monte Carlo scheme was used to calculate the probability distribution of a sensor dart surviving an impact with a branch as a function of branch radius, length, and height from the ground. Both an early prototype design and the current dart design were used in these studies. As a general rule of thumb, we observed that for reasonably generic trees and for a threshold angle of attack of 5° (which is conservative for dart survival), the probability of reaching the ground with an angle of attack less than the threshold is on the order of 30% for the prototype dart design and 60% for the current dart design, though these numbers should be treated with some caution.
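The Monte Carlo scheme described above can be sketched as follows. The branch-parameter distributions and the deflection model below are invented placeholders for illustration only, not the report's actual physics; the structure (sample a branch, compute the resulting angle of attack, compare against the threshold) is the point.

```python
import random

def survival_probability(n_trials, threshold_deg, deflect, seed=0):
    """Monte Carlo estimate of P(angle of attack at ground impact < threshold).

    `deflect` maps a sampled branch (radius, length, height) to the dart's
    angle of attack, in degrees, when it reaches the ground.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        branch = (rng.uniform(0.005, 0.05),  # branch radius, m (assumed range)
                  rng.uniform(0.2, 2.0),     # branch length, m (assumed range)
                  rng.uniform(1.0, 10.0))    # height above ground, m (assumed)
        if deflect(branch, rng) < threshold_deg:
            survived += 1
    return survived / n_trials

def toy_deflection(branch, rng):
    # Placeholder physics: deflection grows with branch radius, plus scatter.
    radius, _length, _height = branch
    return 200.0 * radius + abs(rng.gauss(0.0, 2.0))

p = survival_probability(100_000, threshold_deg=5.0, deflect=toy_deflection)
```

In the report's decomposition this estimate would be one factor, multiplied by the (independent) forest-coverage and branch-density probabilities.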

On Developing a Multifidelity Modeling Algorithm for System-Level Engineering Analysis

Gardner, David R.; Hennigan, Gary L.

Multifidelity modeling, in which one component of a system is modeled at a significantly different level of fidelity than another, has several potential advantages. For example, a higher-fidelity component model can be evaluated in the context of a lower-fidelity full system model that provides more realistic boundary conditions and yet can be executed quickly enough for rapid design changes or design optimization. Developing such multifidelity models presents challenges in several areas, including coupling models with differing spatial dimensionalities. In this report we describe a multifidelity algorithm for thermal radiation problems in which a three-dimensional, finite-element model of a system component is embedded in a system of zero-dimensional (lumped-parameter) components. We tested the algorithm on a prototype system with three problems: heating to a constant temperature, cooling to a constant temperature, and a simulated fire environment. The prototype system consisted of an aeroshell enclosing three components, one of which was represented by a three-dimensional finite-element model. We tested two versions of the algorithm; one used the surface-average temperature of the three-dimensional component to couple it to the system model, and the other used the volume-average temperature. Using the surface-average temperature provided somewhat better temperature predictions than using the volume-average temperature. Our results illustrate the difficulty in specifying consistency for multifidelity models. In particular, we show that two models may be consistent for one application but not for another.
While the temperatures predicted by the multifidelity model were not as accurate as those predicted by a full three-dimensional model, our results show that a multifidelity system model can potentially execute much faster than a full three-dimensional finite-element model for thermal radiation problems, with sufficient accuracy for some applications, while still predicting internal temperatures for the higher-fidelity component. These results indicate that optimization studies with mixed-fidelity models are feasible when they may not be feasible with three-dimensional system models, if the concomitant loss in accuracy is within acceptable bounds.
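The surface-average coupling pattern can be illustrated with a deliberately minimal stand-in: a 1-D explicitly differenced rod in place of the 3-D finite-element component, exchanging thermal radiation with a single lumped node. All material properties, capacities, and geometry below are invented numbers chosen for a stable explicit step, not values from the report.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def step(T_lumped, T_rod, dt=1.0, alpha=1e-5, dx=0.01,
         area=1e-2, C_lumped=50.0, C_surf=10.0):
    """One coupled time step (illustrative values only)."""
    # 1. Advance the "high-fidelity" component: explicit heat diffusion.
    T_new = T_rod.copy()
    T_new[1:-1] += dt * alpha * (T_rod[2:] - 2*T_rod[1:-1] + T_rod[:-2]) / dx**2
    # 2. Couple through the component's surface-average temperature,
    #    as in the surface-average variant of the algorithm.
    T_surf = 0.5 * (T_new[0] + T_new[-1])
    q = SIGMA * area * (T_lumped**4 - T_surf**4)   # net radiative power, W
    # 3. Deposit the exchanged energy in both models.
    T_lumped -= dt * q / C_lumped
    T_new[0] += dt * q / (2.0 * C_surf)
    T_new[-1] += dt * q / (2.0 * C_surf)
    return T_lumped, T_new

T_lumped, T_rod = 600.0, np.full(50, 300.0)   # hot enclosure, cold component
for _ in range(200):
    T_lumped, T_rod = step(T_lumped, T_rod)
```

The lumped node sees the component only through one scalar (its surface-average temperature), while the component retains a full internal temperature field; that asymmetry is the essence of the multifidelity coupling.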

On the Development of a Java-Based Tool for Multifidelity Modeling of Coupled Systems: LDRD Final Report

Gardner, David R.; Castro, Joseph P.; Hennigan, Gary L.; Gonzales, Mark A.; Young, Michael F.

This report describes research and development of methods to couple vastly different subsystems and physical models and to encapsulate these methods in a Java™-based framework. The work described here focused on developing a capability to enable design engineers and safety analysts to perform multifidelity, multiphysics analyses more simply. In particular this report describes a multifidelity algorithm for thermal radiative heat transfer and illustrates its performance. Additionally, it describes a module-based computer software architecture that facilitates multifidelity, multiphysics simulations. The architecture is currently being used to develop an environment for modeling the effects of radiation on electronic circuits in support of the FY 2003 Hostile Environments Milestone for the Accelerated Strategic Computing Initiative.

Developing an Event-Driven Generator for User Interfaces in the Entero Software

Gardner, David R.

The Entero Software Project emphasizes flexibility, integration and scalability in modeling complex engineering systems. The GUIGenerator project supports the Entero environment by providing a user-friendly graphical representation of systems, mutable at runtime. The first phase requires a formal language specification describing the syntax and semantics of the Extensible Markup Language (XML) elements to be utilized, depicted through an XML schema. Given a system, front-end user interaction with stored system data occurs through Java Graphical User Interfaces (GUIs), where often only subsets of system data require user input. The second phase demands interpreting well-formed XML documents into predefined graphical components, including the addition of fixed components not represented in systems, such as buttons. The conversion process utilizes the critical features of JDOM, a Java-based XML parser, and Core Java Reflection, an advanced Java feature that generates objects at runtime using XML input data. Finally, a searching mechanism provides the capability of referencing specific system components through a combination of established search engine techniques and regular expressions, useful for altering visual properties of output. The GUIGenerator will be used to create user interfaces for the Entero environment's code coupling in support of the ASCI Hostile Environments Level 2 milestones in 2003.
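The reflection step, in which XML element names are resolved to GUI component classes at runtime, can be sketched in miniature. This is a Python analogue of the pattern only; the real tool uses JDOM and Java's `Class.forName`-style reflection, and the tag names, widget classes, and attributes below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Two stand-in widget classes (hypothetical; the real tool builds Java GUIs).
class Label:
    def __init__(self, text=""):
        self.text = text

class TextField:
    def __init__(self, name="", value=""):
        self.name, self.value = name, value

# Registry playing the role of runtime reflection: tag name -> class.
WIDGETS = {"Label": Label, "TextField": TextField}

def build_gui(xml_source):
    """Interpret a well-formed XML document into widget objects."""
    root = ET.fromstring(xml_source)
    widgets = []
    for elem in root:
        cls = WIDGETS.get(elem.tag)
        if cls is None:
            raise ValueError(f"no widget registered for <{elem.tag}>")
        widgets.append(cls(**elem.attrib))   # attributes become constructor args
    return widgets

doc = """<Panel>
  <Label text="Density (kg/m^3)"/>
  <TextField name="density" value="7850"/>
</Panel>"""
gui = build_gui(doc)
```

The schema described in the abstract would constrain which tags and attributes are legal before this instantiation step runs.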

The Optimization of a Shaped-Charge Design Using Parallel Computers

Gardner, David R.; Vaughan, Courtenay T.

Current supercomputers use large parallel arrays of tightly coupled processors to achieve levels of performance far surpassing conventional vector supercomputers. Shock-wave physics codes have been developed for these new supercomputers at Sandia National Laboratories and elsewhere. These parallel codes run fast enough on many simulations to consider using them to study the effects of varying design parameters on the performance of models of conventional munitions and other complex systems. Such studies may be directed by optimization software to improve the performance of the modeled system. Using a shaped-charge jet design as an archetypal test case and the CTH parallel shock-wave physics code controlled by the Dakota optimization software, we explored the use of automatic optimization tools to optimize the design for conventional munitions. We used a scheme in which a lower-resolution computational mesh was used to identify candidate optimal solutions, which were then verified using a higher-resolution mesh. We identified three optimal solutions for the model and a region of the design domain where the jet tip speed is nearly optimal, indicating the possibility of a robust design. Based on this study we identified some of the difficulties in using high-fidelity models with optimization software to develop improved designs. These include developing robust algorithms for the objective function and constraints and mitigating the effects of numerical noise in them. We conclude that optimization software running high-fidelity models of physical systems using parallel shock-wave physics codes to find improved designs can be a valuable tool for designers. While the current state of algorithm and software development does not permit routine, "black box" optimization of designs, the effort involved in using the existing tools may well be worth the improvement achieved in designs.
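The two-resolution scheme above (screen candidates cheaply, verify the survivors expensively) can be sketched with a synthetic objective. The objective function, its optimum, and the noise level below are all invented stand-ins for a CTH jet-tip-speed metric; only the screening structure mirrors the abstract.

```python
import random

def true_objective(x):
    # Hypothetical design metric with its peak at x = 0.6 (made up).
    return -(x - 0.6) ** 2

def coarse_eval(x, rng):
    # Cheap low-resolution evaluation: correct trend plus mesh-induced "noise".
    return true_objective(x) + rng.gauss(0.0, 0.02)

def fine_eval(x):
    # Expensive high-resolution evaluation: low-noise verification.
    return true_objective(x)

def screen_then_verify(candidates, n_keep=3, seed=1):
    """Rank all candidates on the coarse mesh; verify the top few finely."""
    rng = random.Random(seed)
    ranked = sorted(candidates, key=lambda x: coarse_eval(x, rng), reverse=True)
    return max(ranked[:n_keep], key=fine_eval)

designs = [i / 20 for i in range(21)]   # design variable sampled on [0, 1]
best = screen_then_verify(designs)
```

The coarse noise term is also a toy model of the numerical-noise difficulty the abstract identifies: if the noise amplitude exceeds the objective's variation between neighboring designs, the coarse ranking stops being informative.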

Transient Solid Dynamics Simulations on the Sandia/Intel Teraflop Computer

Gardner, David R.

Transient solid dynamics simulations are among the most widely used engineering calculations. Industrial applications include vehicle crashworthiness studies, metal forging, and powder compaction prior to sintering. These calculations are also critical to defense applications including safety studies and weapons simulations. The practical importance of these calculations and their computational intensiveness make them natural candidates for parallelization. This has proved to be difficult, and existing implementations fail to scale to more than a few dozen processors. In this paper we describe our parallelization of PRONTO, Sandia's transient solid dynamics code, via a novel algorithmic approach that utilizes multiple decompositions for different key segments of the computations, including the material contact calculation. This latter calculation is notoriously difficult to perform well in parallel, because it involves dynamically changing geometry, global searches for elements in contact, and unstructured communications among the compute nodes. Our approach scales to at least 3600 compute nodes of the Sandia/Intel Teraflop computer (the largest set of nodes to which we have had access to date) on problems involving millions of finite elements. On this machine we can simulate models using more than ten million elements in a few tenths of a second per timestep, and solve problems more than 3000 times faster than a single-processor Cray Jedi.

The development and performance of a message-passing version of the PAGOSA shock-wave physics code

Gardner, David R.

A message-passing version of the PAGOSA shock-wave physics code has been developed at Sandia National Laboratories for multiple-instruction, multiple-data stream (MIMD) computers. PAGOSA is an explicit, Eulerian code for modeling the three-dimensional, high-speed hydrodynamic flow of fluids and the dynamic deformation of solids under high rates of strain. It was originally developed at Los Alamos National Laboratory for the single-instruction, multiple-data (SIMD) Connection Machine parallel computers. The performance of Sandia's message-passing version of PAGOSA has been measured on two MIMD machines, the nCUBE 2 and the Intel Paragon XP/S. No special efforts were made to optimize the code for either machine. The measured scaled speedup (computational time for a single computational node divided by the computational time per node for fixed computational load) and grind time (computational time per cell per time step) show that the MIMD PAGOSA code scales linearly with the number of computational nodes used on a variety of problems, including the simulation of shaped-charge jets perforating an oil well casing. Scaled parallel efficiencies for MIMD PAGOSA are greater than 0.70 when the available memory per node is filled (or nearly filled) on hundreds to a thousand or more computational nodes on these two machines, indicating that the code scales very well. Thus good parallel performance can be achieved for complex and realistic applications when they are first implemented on MIMD parallel computers.
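The performance metrics quoted above are simple ratios and can be made concrete with a short sketch. The timing numbers below are hypothetical, not measurements from the nCUBE 2 or Paragon runs.

```python
def scaled_efficiency(t1, tn):
    """Scaled parallel efficiency: T(1)/T(n), with the work per node held
    fixed as the node count grows (a 'scaled' or weak-scaling problem)."""
    return t1 / tn

def scaled_speedup(t1, tn, n_nodes):
    """Scaled speedup: n * T(1)/T(n) for a problem that grows with n."""
    return n_nodes * scaled_efficiency(t1, tn)

def grind_time(wall_time, n_cells, n_steps):
    """Wall-clock time per computational cell per time step."""
    return wall_time / (n_cells * n_steps)

# Hypothetical weak-scaling run on 512 nodes, per-node load fixed:
t1, t512 = 100.0, 135.0
eff = scaled_efficiency(t1, t512)   # > 0.70 would meet the bar quoted above
```

Linear scaling in the abstract's sense corresponds to T(n) staying roughly constant as nodes and problem size grow together, i.e., efficiency staying near 1 and grind time staying flat.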

Near-field dispersal modeling for liquid fuel-air explosives

Gardner, David R.

The near-field, explosive dispersal of a liquid into air has been explored using a combination of analytical and numerical models. The near-field flow regime is transient, existing only as long as the explosive forces produced by the detonation of the burster charge dominate or are approximately equal in magnitude to the aerodynamic drag forces on the liquid. The near-field model provides reasonable initial conditions for the far-field model, which is described in a separate report. The near-field model consists of the CTH hydrodynamics code and a film instability model. In particular, the CTH hydrodynamics code is used to provide initial temperature, pressure, and velocity fields, and bulk material distribution for the far-field model. The film instability model is a linear stability model for a radially expanding fluid film, and is used to provide a lower bound on the breakup time and an upper and lower bound on the initial average drop diameter for the liquid following breakup. Predictions of the liquid breakup time and the initial arithmetic average drop diameter from the model compare favorably with the sparse experimental data. 26 refs., 20 figs., 8 tabs.
