Publications

Results 9901–9950 of 9,998

Search results

Experiments on Adaptive Techniques for Host-Based Intrusion Detection

Draelos, Timothy J.; Collins, Michael J.; Duggan, David P.; Thomas, Edward V.

This research explores four experiments with adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their use of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which is unable to detect novel exploits, and anomaly detection, which detects too many events, including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.
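
As a hedged illustration of the Elman-network component mentioned above, the sketch below runs a single Elman-style recurrent cell over a sequence of preprocessed event feature vectors and emits one score per sequence; the layer sizes, random weights, and random "events" are placeholders, not the paper's trained configuration or its BSM preprocessing.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 8, 16                      # hypothetical feature and hidden sizes
    W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
    W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))
    W_hy = rng.normal(scale=0.1, size=(1, n_hid))

    def score_sequence(events):
        """Run an Elman cell over a sequence of feature vectors and return a
        scalar score (e.g., exploit vs. clean) from the final hidden state."""
        h = np.zeros(n_hid)
        for x in events:                      # x: one preprocessed event's features
            h = np.tanh(W_xh @ x + W_hh @ h)  # hidden state feeds back each step
        return float(1.0 / (1.0 + np.exp(-(W_hy @ h)[0])))

    # Twenty random "events" stand in for a preprocessed audit trace.
    print(score_sequence(rng.normal(size=(20, n_in))))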

Description of the Sandia Validation Metrics Project

Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel U.; Romero, Vicente J.; Rutherford, Brian M.; Hills, Richard G.

This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.

Scalable rendering on PC clusters

IEEE Computer Graphics and Applications

Wylie, Brian N.; Pavlakos, Constantine P.; Lewis, Vasily L.; Moreland, Kenneth D.

To achieve higher rendering performance, the use of a parallel sort-last architecture on a PC cluster is presented. The sort-last library (libpglc) can be linked to an existing parallel application to achieve high rendering rates. The efficient use of 64 commodity graphics cards enables pace-setting rendering performance of 300 million triangles per second on extremely large data sets.
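
In a sort-last pipeline, each node renders its portion of the data into a full-resolution image with a depth buffer, and the partial images are then composited by per-pixel depth comparison. The numpy sketch below shows only that compositing step; it is illustrative and is not the libpglc interface.

    import numpy as np

    def composite(color_a, z_a, color_b, z_b):
        """Per-pixel depth compositing of two partial renderings:
        keep the fragment closer to the viewer (smaller z)."""
        nearer = z_a < z_b
        color = np.where(nearer[..., None], color_a, color_b)
        z = np.where(nearer, z_a, z_b)
        return color, z

    # Two hypothetical 480x640 RGB partial images with depth buffers.
    h, w = 480, 640
    rng = np.random.default_rng(1)
    ca, cb = rng.random((h, w, 3)), rng.random((h, w, 3))
    za, zb = rng.random((h, w)), rng.random((h, w))
    final_color, final_z = composite(ca, za, cb, zb)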

Characterization of UOP IONSIV IE-911

Nyman, M.; Nenoff, T.M.; Headley, Thomas J.

As a participating national lab in the inter-institutional effort to resolve performance issues of the non-elutable ion exchange technology for Cs extraction, the authors have carried out a series of characterization studies of UOP IONSIV® IE-911 and its component parts. IE-911 is a bound form (zirconium hydroxide binder) of the crystalline silicotitanate (CST) ion exchanger. The crystalline silicotitanate removes Cs from solutions by selective ion exchange. The performance issues of primary concern are: (1) excessive Nb leaching and subsequent precipitation of column-plugging Nb-oxide material, and (2) precipitation of aluminosilicate on IE-911 pellet surfaces, which may be initiated by dissolution of Si from the IE-911, thus creating a solution supersaturated with respect to silica. In this work, the authors have identified and characterized Si- and Nb-oxide-based impurity phases in IE-911, which are the most likely sources of leachable Si and Nb, respectively. Furthermore, they have determined the criteria and mechanism for removing from IE-911 the Nb-based impurity phase responsible for the Nb-oxide column plugging incidents.

Quadratic Reciprocity and the Group Orders of Particle States

Wagner, John S.

The construction of inverse states in a finite field F_{P_α} enables the organization of the mass scale by associating particle states with residue class designations. With the assumption of perfect flatness (Ω_total = 1.0), this approach leads to the derivation of a cosmic seesaw congruence which unifies the concepts of space and mass. The law of quadratic reciprocity profoundly constrains the subgroup structure of the multiplicative group of units F_{P_α}* defined by the field. Four specific outcomes of this organization are (1) a reduction in the computational complexity of the mass state distribution by a factor of ~10^30, (2) the extension of the genetic divisor concept to the classification of subgroup orders, (3) the derivation of a simple numerical test for any prospective mass number based on the order of the integer, and (4) the identification of direct biological analogies to taxonomy and regulatory networks characteristic of cellular metabolism, tumor suppression, immunology, and evolution. It is generally concluded that the organizing principle legislated by the alliance of quadratic reciprocity with the cosmic seesaw creates a universal optimized structure that functions in the regulation of a broad range of complex phenomena.
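
Two of the number-theoretic ingredients above are elementary to demonstrate: the possible subgroup orders of the multiplicative group F_{P_α}* are the divisors of the group order, and the "order of the integer" test reduces to computing a multiplicative order. The Python sketch below does both for an arbitrary small prime chosen only for illustration, with no claim about the report's physical assignments.

    from math import gcd

    def divisors(n):
        """All positive divisors of n (the candidate subgroup orders when n = p - 1)."""
        return sorted(d for i in range(1, int(n**0.5) + 1) if n % i == 0
                      for d in {i, n // i})

    def mult_order(a, p):
        """Multiplicative order of a modulo prime p: smallest d with a^d = 1 (mod p)."""
        assert gcd(a, p) == 1
        for d in divisors(p - 1):          # the order must divide p - 1 (Lagrange)
            if pow(a, d, p) == 1:
                return d

    p = 101                                # arbitrary small prime for illustration
    print(divisors(p - 1))                 # possible subgroup orders: divisors of 100
    print(mult_order(3, p))                # order of 3 in F_101* (3 is a primitive root)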

Determination of Supersymmetric Particle Masses and Attributes with Genetic Divisors

Wagner, John S.

Arithmetic conditions relating particle masses can be defined on the basis of (1) the supersymmetric conservation of congruence and (2) the observed characteristics of particle reactions and stabilities. Stated in the form of common divisors, these relations can be interpreted as expressions of genetic elements that represent specific particle characteristics. In order to illustrate this concept, it is shown that the pion triplet (π^±, π^0) can be associated with the existence of a greatest common divisor d_{0±} in a way that can account for both the highly similar physical properties of these particles and the observed π^±/π^0 mass splitting. These results support the conclusion that a corresponding statement holds generally for all particle multiplets. Classification of the respective physical states is achieved by assignment of the common divisors to residue classes in a finite field F_{P_α}, and the existence of the multiplicative group of units F_{P_α} enables the corresponding mass parameters to be associated with a rich subgroup structure. The existence of inverse states in F_{P_α} allows relationships connecting particle mass values to be conveniently expressed in a form in which the genetic divisor structure is prominent. An example is given in which the masses of two neutral mesons (K^0 → π^0) are related to the properties of the electron (e), a charged lepton. Physically, since this relationship reflects the cascade decay K^0 → π^0 + π^0 / π^0 → e^+ + e^-, in which a neutral kaon is converted into four charged leptons, it enables the genetic divisor concept, through the intrinsic algebraic structure of the field, to provide a theoretical basis for the conservation of both electric charge and lepton number. It is further shown that the fundamental source of supersymmetry can be expressed in terms of hierarchical relationships between odd and even order subgroups of F_{P_α}, an outcome that automatically reflects itself in the phenomenon of fermion/boson pairing of individual particle systems. Accordingly, supersymmetry is best represented as a group rather than a particle property. The status of the Higgs subgroup of order 4 is singular; it is isolated from the hierarchical pattern and communicates globally to the mass scale through the seesaw congruence by (1) fusing the concepts of mass and space and (2) specifying the generators of the physical masses.
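
At the arithmetic level, the genetic-divisor idea asks whether the integer parameters assigned to a multiplet share a nontrivial common divisor. The following toy sketch computes such a common divisor for entirely hypothetical integers; it illustrates the arithmetic only, not the physical assignments made in the report.

    from math import gcd
    from functools import reduce

    def common_divisor(masses):
        """Greatest common divisor of a multiplet's integer mass parameters."""
        return reduce(gcd, masses)

    # Entirely hypothetical integer parameters for a three-member multiplet.
    multiplet = [1386, 1398, 1410]
    d = common_divisor(multiplet)
    print(d, [m // d for m in multiplet])   # shared divisor and residual cofactors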

Programming Paradigms for Massively Parallel Computers: LDRD Project Final Report

Brightwell, Ronald B.

This technical report presents the initial proposal and renewal proposals for an LDRD project whose intended goal was to enable applications to take full advantage of the hardware available on Sandia's current and future massively parallel supercomputers by analyzing various ways of combining distributed-memory and shared-memory programming models. Despite Sandia's enormous success with distributed-memory parallel machines and the message-passing programming model, clusters of shared-memory processors appeared to be the massively parallel architecture of the future at the time this project was proposed. The researchers had hoped to analyze various hybrid programming models for their effectiveness and characterize the types of application to which each model was well suited. The report presents the initial research proposal and subsequent continuation proposals that highlight the proposed work and summarize the accomplishments.

Superresolution and Synthetic Aperture Radar

Dickey, Fred M.; Romero, L.A.; Doerry, Armin

Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. The application of the concept to synthetic aperture radar is investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. A criterion for judging superresolution processing of an image is presented.

Effect of pressure, membrane thickness, and placement of control volumes on the flux of methane through thin silicalite membranes: A dual control volume grand canonical molecular dynamics study

Journal of Chemical Physics

Martin, Marcus G.; Thompson, Aidan P.; Nenoff, T.M.

Dual control volume molecular dynamics was employed to study the flux of methane through channels of thin silicalite membranes. The DCANIS force field was used to describe the adsorption isotherms of methane and ethane in silicalite. The alkane parameters and silicalite parameters were determined by fitting the DCANIS force field to single-component vapor-liquid coexistence curves (VLCC) and adsorption isotherms, respectively. The adsorption layers on the surfaces of thin silicalite membranes showed a significant resistance to the flux of methane. The results showed that the permeance is insensitive to both the average pressure and the pressure drop.

ACME Algorithms for Contact in a Multiphysics Environment API Version 0.3a

Brown, Kevin H.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin W.; Jones, Reese E.; Summers, Randall M.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Reference Manual

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

Fast through-bond diffusion of nitrogen in silicon

Applied Physics Letters

Schultz, Peter A.

We report first-principles total energy calculations of interaction of nitrogen in silicon with silicon self-interstitials. Substitutional nitrogen captures a silicon interstitial with 3.5 eV binding energy forming a (100) split interstitial ground-state geometry, with the nitrogen forming three bonds. The low-energy migration path is through a bond bridge state having two bonds. Fast diffusion of nitrogen occurs through a pure interstitialcy mechanism: the nitrogen never has less than two bonds. Near-zero formation energy of the nitrogen interstitialcy with respect to the substitutional rationalizes the low solubility of substitutional nitrogen in silicon. © 2001 American Institute of Physics.

Gridless Compressible Flow: A White Paper

Strickland, James H.

In this paper the development of a gridless method to solve compressible flow problems is discussed. The governing evolution equations for velocity divergence δ, vorticity ω, density ρ, and temperature T are obtained from the primitive variable Navier-Stokes equations. Simplifications to the equations resulting from assumptions of ideal gas behavior, adiabatic flow, and/or constant viscosity coefficients are given. A general solution technique is outlined with some discussion regarding alternative approaches. Two radial flow model problems are considered which are solved using both a finite difference method and a compressible particle method. The first of these is an isentropic inviscid 1D spherical flow which initially has a Gaussian temperature distribution with zero velocity everywhere. The second problem is an isentropic inviscid 2D radial flow which has an initial vorticity distribution with constant temperature everywhere. Results from the finite difference and compressible particle calculations are compared in each case. A summary of the results obtained herein is given along with recommendations for continuing the work.

Collaborative evaluation of early design decisions and product manufacturability

Proceedings of the Hawaii International Conference on System Sciences

Kleban, S.D.; Stubblefield, W.A.; Mitchiner, K.W.; Mitchiner, John L.; Arms, Robert M.

In manufacturing, the conceptual design and detailed design stages are typically regarded as sequential and distinct. Decisions made in conceptual design are often made with little information as to how they would affect detailed design or manufacturing process specification. Many possibilities and unknowns exist in conceptual design where ideas about product shape and functionality are changing rapidly. Few if any tools exist to aid in this difficult, amorphous stage in contrast to the many CAD and analysis tools for detailed design where much more is known about the final product. The Materials Process Design Environment (MPDE) is a collaborative problem solving environment (CPSE) that was developed so geographically dispersed designers in both the conceptual and detailed stage can work together and understand the impacts of their design decisions on functionality, cost and manufacturability.

Peer Review Process for the Sandia ASCI V and V Program: Version 1.0

Pilch, Martin P.; Trucano, Timothy G.; Peercy, David E.; Hodges, Ann L.; Young, Eunice R.; Moya, Jaime L.

This report describes the initial definition of the Verification and Validation (V and V) Plan Peer Review Process at Sandia National Laboratories. V and V peer review at Sandia is intended to assess the ASCI code team V and V planning process and execution. Our peer review definition is designed to assess the V and V planning process in terms of the content specified by the Sandia Guidelines for V and V plans. Therefore, the peer review process and process for improving the Guidelines are necessarily synchronized, and form parts of a larger quality improvement process supporting the ASCI V and V program at Sandia.

Hexahedral Mesh Untangling

Engineering with Computers

Knupp, Patrick K.

We investigate a well-motivated mesh untangling objective function whose optimization automatically produces non-inverted elements when possible. Examples show the procedure is highly effective on simplicial meshes and on non-simplicial (e.g., hexahedral) meshes constructed via mapping or sweeping algorithms. The current whisker-weaving (WW) algorithm in CUBIT usually produces hexahedral meshes that are unsuitable for analyses due to inverted elements. The majority of these meshes cannot be untangled using the new objective function. The most likely source of the difficulty is poor mesh topology.
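
The untangling idea can be illustrated in two dimensions: if α_e is the signed area (Jacobian) of element e, an objective of the form Σ_e (|α_e| − α_e) is zero exactly when no element is inverted, so driving it toward zero untangles the mesh. The sketch below evaluates such an objective on a toy triangle patch; it is a simplified stand-in, not CUBIT's implementation or the paper's exact objective function.

    import numpy as np

    def signed_areas(coords, tris):
        """Signed area (2D Jacobian) of each triangle; negative means inverted."""
        a, b, c = coords[tris[:, 0]], coords[tris[:, 1]], coords[tris[:, 2]]
        u, v = b - a, c - a
        return 0.5 * (u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0])

    def untangle_objective(coords, tris):
        """Sum of (|area| - area): zero exactly when no triangle is inverted."""
        alpha = signed_areas(coords, tris)
        return float(np.sum(np.abs(alpha) - alpha))

    # Hypothetical 4-node patch; node 3 starts inside the outer triangle.
    coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.4, 0.4]])
    tris = np.array([[0, 1, 3], [1, 2, 3], [2, 0, 3]])
    coords_bad = coords.copy()
    coords_bad[3] = [1.2, 1.2]           # moving node 3 outside inverts one element
    print(untangle_objective(coords, tris), untangle_objective(coords_bad, tris))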

Experimental Results on Statistical Approaches to Page Replacement Policies

Leung, Vitus J.

This paper investigates the question of what statistical information about a memory request sequence is useful in making page replacement decisions. Our starting point is the Markov Request Model for page request sequences. Although the utility of modeling page request sequences by the Markov model has recently been put into doubt, we find that two previously suggested algorithms (Maximum Hitting Time and Dominating Distribution), which are based on the Markov model, work well on the trace data used in this study. Interestingly, both of these algorithms perform equally well despite the fact that the theoretical results for these two algorithms differ dramatically. We then develop succinct characteristics of memory access patterns in an attempt to approximate the simpler of the two algorithms. Finally, we investigate how to collect these characteristics in an online manner in order to have a purely online algorithm.
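
The Markov Request Model referred to above treats the next page requested as depending only on the current page, so its statistics can be estimated directly from a trace. The sketch below builds that first-order transition table from a toy trace; it is illustrative input-gathering only, not the Maximum Hitting Time or Dominating Distribution algorithms themselves.

    from collections import Counter, defaultdict

    def estimate_transitions(trace):
        """First-order Markov model of a page request trace:
        P(next = q | current = p) estimated from adjacent request pairs."""
        counts = defaultdict(Counter)
        for p, q in zip(trace, trace[1:]):
            counts[p][q] += 1
        return {p: {q: c / sum(nexts.values()) for q, c in nexts.items()}
                for p, nexts in counts.items()}

    # Hypothetical short trace of page numbers.
    trace = [1, 2, 1, 3, 1, 2, 2, 3, 1, 2]
    for page, dist in estimate_transitions(trace).items():
        print(page, dist)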

The Xyce Parallel Electronic Simulator - An Overview

Hutchinson, Scott A.; Keiter, Eric R.; Hoekstra, Robert J.; Watts, Herman A.; Waters, Lon J.; Schells, Regina L.; Wix, Steven D.

The Xyce™ Parallel Electronic Simulator has been written to support the simulation needs of the Sandia National Laboratories electrical designers. As such, the development has focused on providing the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). In addition, the developers are providing improved performance for numerical kernels using state-of-the-art algorithms, support for modeling circuit phenomena at a variety of abstraction levels, and object-oriented, modern coding practices that ensure the code will be maintainable and extensible far into the future. The code is a parallel code in the most general sense of the phrase--a message-passing parallel implementation--which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows.

Failure analysis of tungsten coated polysilicon micromachined microengines

Proceedings of SPIE - The International Society for Optical Engineering

Walraven, J.A.; Mani, Seethambal S.; Fleming, J.G.; Headley, Thomas J.; Kotula, Paul G.; Pimentel, Alejandro A.; Rye, Michael J.; Tanner, Danelle M.; Smith, Norman F.

Failure analysis (FA) tools have been applied to analyze tungsten coated polysilicon microengines. These devices were stressed under accelerated conditions at ambient temperatures and pressure. Preliminary results illustrating the failure modes of microengines operated under variable humidity and ultra-high drive frequency will also be shown. Analysis of tungsten coated microengines revealed the absence of wear debris in microengines operated under ambient conditions. Plan view imaging of these microengines using scanning electron microscopy (SEM) revealed no accumulation of wear debris on the surface of the gears or ground plane on microengines operated under standard laboratory conditions. Friction bearing surfaces were exposed and analyzed using the focused ion beam (FIB). These cross sections revealed no accumulation of debris along friction bearing surfaces. By using transmission electron microscopy (TEM) in conjunction with electron energy loss spectroscopy (EELS), we were able to identify the thickness, elemental analysis, and crystallographic properties of tungsten coated MEMS devices. Atomic force microscopy was also utilized to analyze the surface roughness of friction bearing surfaces.

Aspen-EE: An Agent-Based Model of Infrastructure Interdependency

Barton, Dianne C.; Eidson, Eric D.; Schoenwald, David A.; Stamber, Kevin L.; Reinert, Rhonda K.

This report describes the features of Aspen-EE (Electricity Enhancement), a new model for simulating the interdependent effects of market decisions and disruptions in the electric power system on other critical infrastructures in the US economy. Aspen-EE extends and modifies the capabilities of Aspen, an agent-based model previously developed by Sandia National Laboratories. Aspen-EE was tested on a series of scenarios in which the rules governing electric power trades were changed. Analysis of the scenario results indicates that the power generation company agents will adjust the quantity of power bid into each market as a function of the market rules. Results indicate that when two power markets are faced with identical economic circumstances, the traditionally higher-priced market sees its market clearing price decline, while the traditionally lower-priced market sees a relative increase in market clearing price. These results indicate that Aspen-EE is predicting power market trends that are consistent with expected economic behavior.
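
The market-clearing behavior described above can be pictured with a toy uniform-price auction: generator bids are accepted in merit order until demand is met, and the marginal bid sets the clearing price. The sketch below is a generic illustration of that mechanism, not Aspen-EE's agent logic; the bids and demand are hypothetical.

    def clear_market(bids, demand):
        """Uniform-price clearing: accept cheapest bids first until demand is met;
        the marginal (last accepted) bid sets the market clearing price."""
        accepted, price = [], None
        for qty, bid_price in sorted(bids, key=lambda b: b[1]):
            if demand <= 0:
                break
            take = min(qty, demand)
            accepted.append((take, bid_price))
            price = bid_price
            demand -= take
        return price, accepted

    # Hypothetical generator bids: (MW offered, $/MWh).
    bids = [(100, 20.0), (50, 35.0), (80, 28.0), (60, 50.0)]
    print(clear_market(bids, demand=200))   # clearing price set by the marginal bid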

PICO: An Object-Oriented Framework for Branch and Bound

Hart, William E.; Phillips, Cynthia A.

This report describes the design of PICO, a C++ framework for implementing general parallel branch-and-bound algorithms. The PICO framework provides a mechanism for the efficient implementation of a wide range of branch-and-bound methods on an equally wide range of parallel computing platforms. We first discuss the basic architecture of PICO, including the application class hierarchy and the package's serial and parallel layers. We next describe the design of the serial layer, and its central notion of manipulating subproblem states. Then, we discuss the design of the parallel layer, which includes flexible processor clustering and communication rates, various load balancing mechanisms, and a non-preemptive task scheduler running on each processor. We describe the application of the package to a branch-and-bound method for mixed integer programming, along with computational results on the ASCI Red massively parallel computer. Finally we describe the application of the branch-and-bound mixed-integer programming code to a resource constrained project scheduling problem for Pantex.
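
Branch and bound explores a tree of subproblem states and prunes any subproblem whose bound cannot improve on the incumbent. The compact serial sketch below applies that idea to a 0/1 knapsack instance; it illustrates only the subproblem-state and bounding concepts and has no connection to PICO's classes or API.

    def knapsack_branch_and_bound(values, weights, capacity):
        """Serial 0/1 knapsack branch and bound.
        A subproblem state is (next item index, remaining capacity, value so far)."""
        n = len(values)
        order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
        v = [values[i] for i in order]
        w = [weights[i] for i in order]
        best = 0

        def bound(i, cap, val):
            # Fractional (LP) relaxation bound using items i..n-1.
            for j in range(i, n):
                if w[j] <= cap:
                    cap -= w[j]
                    val += v[j]
                else:
                    return val + v[j] * cap / w[j]
            return val

        def branch(i, cap, val):
            nonlocal best
            best = max(best, val)
            if i == n or bound(i, cap, val) <= best:
                return                                   # prune: bound cannot beat incumbent
            if w[i] <= cap:
                branch(i + 1, cap - w[i], val + v[i])    # take item i
            branch(i + 1, cap, val)                      # skip item i

        branch(0, capacity, 0)
        return best

    print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # -> 220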

ATR2000 Mercury/MPI Real-Time ATR System User's Guide

Meyer, Richard H.; Doerfler, Douglas W.

The Air Force's Electronic Systems Center has funded Sandia National Laboratories to develop an Automatic Target Recognition (ATR) System for the Air Force's Joint STARS platform using Mercury Computer systems hardware. This report provides general theory on the internal operations of the Real-Time ATR system and provides some basic techniques that can be used to reconfigure the system and monitor its runtime operation. In addition, general information on how to interface an image formation processor and a human machine interface to the ATR is provided. This report is not meant to be a tutorial on the ATR algorithms.

Visualization of Information Spaces with VxInsight

Wylie, Brian N.; Boyack, Kevin W.; Davidson, George S.

VxInsight provides a visual mechanism for browsing, exploring and retrieving information from a database. The graphical display conveys information about the relationship between objects in several ways and on multiple scales. In this way, individual objects are always observed within a larger context. For example, consider a database consisting of a set of scientific papers. Imagine that the papers have been organized in a two-dimensional geometry so that related papers are located close to each other. Now construct a landscape where the altitude reflects the local density of papers. Papers on physics will form a mountain range, and a different range will stand over the biological papers. In between will be research reports from biophysics and other bridging disciplines. Now, imagine exploring these mountains. If we zoom in closer, the physics mountains will resolve into a set of sub-disciplines. Eventually, by zooming in far enough, the individual papers become visible. By pointing and clicking you can learn more about papers of interest or retrieve their full text. Although physical proximity conveys a great deal of information about the relationship between documents, you can also see which papers reference which others, by drawing lines between the citing and cited papers. For even more information, you can choose to highlight papers by a particular researcher or a particular institution, or show the accumulation of papers through time, watching some disciplines explode and others stagnate. VxInsight is a general-purpose tool, which enables this kind of interaction with a wide variety of relational data: documents, patents, web pages, and financial transactions are just a few examples. The tool allows users to interactively browse, explore and retrieve information from the database in an intuitive way.
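
The landscape metaphor can be reproduced directly: given two-dimensional coordinates for the objects, local density on a grid becomes terrain altitude. The sketch below builds such a terrain from randomly generated "paper" coordinates standing in for a real similarity layout; it is not VxInsight's rendering code.

    import numpy as np

    def density_landscape(xy, bins=64, smooth=2):
        """Altitude map whose height reflects the local density of objects:
        a 2D histogram of object positions, box-smoothed a few times."""
        hist, xedges, yedges = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins)
        terrain = hist
        for _ in range(smooth):          # crude smoothing: average with shifted copies
            terrain = (terrain
                       + np.roll(terrain, 1, axis=0) + np.roll(terrain, -1, axis=0)
                       + np.roll(terrain, 1, axis=1) + np.roll(terrain, -1, axis=1)) / 5.0
        return terrain, xedges, yedges

    # Two hypothetical "disciplines": clusters of papers in a 2D similarity layout.
    rng = np.random.default_rng(2)
    papers = np.vstack([rng.normal([0, 0], 0.5, (500, 2)),
                        rng.normal([4, 4], 0.8, (300, 2))])
    terrain, _, _ = density_landscape(papers)
    print(terrain.shape, terrain.max())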

The Consistent Kinetics Porosity (CKP) Model: A Theory for the Mechanical Behavior of Moderately Porous Solids

Brannon, Rebecca M.

A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion sequel report, the CKP model is capable of closely matching plate impact measurements for porous materials.
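
For readers unfamiliar with the normality assumption mentioned above, the generic associated flow rule it refers to can be written as below, with f the yield function of pressure p, effective shear stress σ̄, and porosity φ, and λ̇ the plastic multiplier; these are textbook plasticity relations in generic notation, not the CKP model's specific equations.

    f(p, \bar{\sigma}, \phi) \le 0,
    \qquad
    \dot{\varepsilon}^{p}_{ij} = \dot{\lambda}\,
        \frac{\partial f}{\partial \sigma_{ij}},
    \qquad
    \dot{\lambda} \ge 0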

VFLOW2D - A Vortex-Based Code for Computing Flow Over Elastically Supported Tubes and Tube Arrays

Wolfe, Walter P.; Strickland, James H.; Homicz, Gregory F.; Gossler, A.A.

A numerical flow model is developed to simulate two-dimensional fluid flow past immersed, elastically supported tube arrays. This work is motivated by the objective of predicting forces and motion associated with both deep-water drilling and production risers in the oil industry. This work has other engineering applications including simulation of flow past tubular heat exchangers or submarine-towed sensor arrays and the flow about parachute ribbons. In the present work, a vortex method is used for solving the unsteady flow field. This method demonstrates inherent advantages over more conventional grid-based computational fluid dynamics. The vortex method is non-iterative, does not require artificial viscosity for stability, displays minimal numerical diffusion, can easily treat moving boundaries, and allows a greatly reduced computational domain since vorticity occupies only a small fraction of the fluid volume. A gridless approach is used in the flow sufficiently distant from surfaces. A Lagrangian remap scheme is used near surfaces to calculate diffusion and convection of vorticity. A fast multipole technique is utilized for efficient calculation of velocity from the vorticity field. The ability of the method to correctly predict lift and drag forces on simple stationary geometries over a broad range of Reynolds numbers is presented.
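
In a two-dimensional vortex method, the velocity induced at a point by the vorticity field follows from the Biot-Savart law; the direct O(N²) summation over point vortices below is the quantity that the fast multipole technique accelerates. The vortex positions and strengths here are arbitrary illustrations.

    import numpy as np

    def induced_velocity(targets, vortex_xy, gamma, eps=1e-6):
        """Biot-Savart velocity at each target from 2D point vortices of strength gamma:
        u = -Gamma (y - y0) / (2 pi r^2),  v = Gamma (x - x0) / (2 pi r^2)."""
        dx = targets[:, None, 0] - vortex_xy[None, :, 0]
        dy = targets[:, None, 1] - vortex_xy[None, :, 1]
        r2 = dx * dx + dy * dy + eps     # eps regularizes the singularity at r = 0
        u = np.sum(-gamma * dy / (2 * np.pi * r2), axis=1)
        v = np.sum(gamma * dx / (2 * np.pi * r2), axis=1)
        return np.column_stack([u, v])

    # One hypothetical unit-strength vortex at the origin, sampled on the x-axis.
    vortices = np.array([[0.0, 0.0]])
    gamma = np.array([1.0])
    targets = np.array([[1.0, 0.0], [2.0, 0.0]])
    print(induced_velocity(targets, vortices, gamma))   # purely tangential (v) velocity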

Digitally Marking RSA Moduli

Johnston, Anna M.

The moduli used in RSA (see [5]) can be generated by many different sources. The generator of a modulus (assuming a single entity generates the modulus) knows its factorization, and would therefore have the ability to forge signatures or break any system based on that modulus. If a modulus and the RSA parameters associated with it were generated by a reputable source, the system would have higher value than if the parameters were generated by an unknown entity. So for tracking, security, confidence and financial reasons it would be beneficial to know who the generator of the RSA modulus was. This is where digital marking comes in. An RSA modulus is digitally marked, or digitally trademarked, if the generator and other identifying features of the modulus (such as its intended user, the version number, etc.) can be identified and possibly verified from the modulus itself. The basic concept of digitally marking an RSA modulus is to fix the upper bits of the modulus to this tag. Thus anyone who sees the public modulus can tell who generated the modulus and who the generator believes the intended user/owner of the modulus is.
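
The "fix the upper bits" idea can be demonstrated directly: pick one prime at random, then search the narrow interval that forces the product to begin with the desired tag bits. The toy sketch below uses tiny, insecure parameter sizes, a made-up tag value, and the sympy library for primality routines; it illustrates the basic idea only, not the scheme in the report.

    from sympy import nextprime, randprime

    def marked_modulus(tag, tag_bits=16, mod_bits=128):
        """Generate n = p*q whose top tag_bits equal `tag` (toy sizes, not secure)."""
        shift = mod_bits - tag_bits
        lo, hi = tag << shift, ((tag + 1) << shift) - 1   # all moduli starting with the tag
        while True:
            p = randprime(2 ** (mod_bits // 2 - 1), 2 ** (mod_bits // 2))
            q = nextprime(lo // p + 1)                    # smallest prime pushing p*q past lo
            if lo <= p * q <= hi:
                return p, q, p * q

    tag = 0xBEEF                          # hypothetical 16-bit "trademark"
    p, q, n = marked_modulus(tag)
    assert n >> (128 - 16) == tag         # the public modulus carries the tag in its top bits
    print(hex(n))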

Methods for Multisweep Automation

Shepherd, Jason F.; Mitchell, Scott A.; Knupp, Patrick K.; White, David R.

Sweeping has become the workhorse algorithm for creating conforming hexahedral meshes of complex models. This paper describes progress on the automatic, robust generation of MultiSwept meshes in CUBIT. MultiSweeping extends the class of volumes that may be swept to include those with multiple source and multiple target surfaces. While not yet perfect, CUBIT's MultiSweeping has recently become more reliable, and been extended to assemblies of volumes. Sweep Forging automates the process of making a volume (multi) sweepable. Sweep Verification takes the given source and target surfaces, and automatically classifies curve and vertex types so that sweep layers are well formed and progress from sources to targets.

Compact vs. Exponential-Size LP Relaxations

Carr, Robert D.; Lancia, G.

In this paper we introduce by means of examples a new technique for formulating compact (i.e. polynomial-size) LP relaxations in place of exponential-size models requiring separation algorithms. In the same vein as a celebrated theorem by Groetschel, Lovasz and Schrijver, we state the equivalence of compact separation and compact optimization. Among the examples used to illustrate our technique, we introduce a new formulation for the Traveling Salesman Problem, whose relaxation we show equivalent to the subtour elimination relaxation.
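
For concreteness, the exponential-size subtour elimination relaxation of the symmetric Traveling Salesman Problem mentioned above is the standard LP below, with one cut constraint for every nontrivial vertex subset S (generic notation):

    \begin{aligned}
    \min\ & \sum_{e \in E} c_e x_e \\
    \text{s.t.}\ & \sum_{e \in \delta(v)} x_e = 2 \quad \forall v \in V, \\
    & \sum_{e \in \delta(S)} x_e \ge 2 \quad \forall S \subset V,\ 2 \le |S| \le |V| - 2, \\
    & 0 \le x_e \le 1 \quad \forall e \in E.
    \end{aligned}

Because there are exponentially many subsets S, this model is normally handled via a separation algorithm, which is exactly the situation the compact reformulation avoids.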

On the Development of a Gridless Inflation Code for Parachute Simulations

Strickland, James H.; Homicz, Gregory F.; Gossler, A.A.; Wolfe, Walter P.; Porter, V.L.

In this paper the authors present the current status of an unsteady 3D parachute simulation code which is being developed at Sandia National Laboratories under the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The Vortex Inflation PARachute code (VIPAR) which embodies this effort will eventually be able to perform complete numerical simulations of ribbon parachute deployment, inflation, and steady descent. At the present time they have a working serial version of the uncoupled fluids code which can simulate unsteady 3D incompressible flows around bluff bodies made up of triangular membrane elements. A parallel version of the code has just been completed which will allow one to compute flows over complex geometries utilizing several thousand processors on one of the new DOE teraFLOP computers.

Massively Parallel Direct Simulation of Multiphase Flow

Cook, Benjamin K.; Preece, Dale S.

The authors' understanding of multiphase physics and the associated predictive capability for multi-phase systems are severely limited by current continuum modeling methods and experimental approaches. This research will deliver an unprecedented modeling capability to directly simulate three-dimensional multi-phase systems at the particle-scale. The model solves the fully coupled equations of motion governing the fluid phase and the individual particles comprising the solid phase using a newly discovered, highly efficient coupled numerical method based on the discrete-element method and the Lattice-Boltzmann method. A massively parallel implementation will enable the solution of large, physically realistic systems.

Algorithmic Strategies in Combinatorial Chemistry

Istrail, Sorin I.; Womble, David E.

Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
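
One classic shortest-path topological index of the kind referred to above is the Wiener index, the sum of shortest-path distances over all pairs of atoms in the molecular graph. The sketch below computes it by breadth-first search on a hypothetical five-atom chain; it illustrates the index family only and is not OCOTILLO code.

    from collections import deque

    def wiener_index(adjacency):
        """Wiener index: sum of shortest-path distances over all unordered vertex pairs."""
        n = len(adjacency)
        total = 0
        for source in range(n):
            dist = [-1] * n
            dist[source] = 0
            queue = deque([source])
            while queue:                 # breadth-first search from `source`
                u = queue.popleft()
                for v in adjacency[u]:
                    if dist[v] < 0:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            total += sum(dist)
        return total // 2                # each pair was counted from both endpoints

    # Hypothetical molecular graph: a simple 5-atom chain 0-1-2-3-4.
    chain = [[1], [0, 2], [1, 3], [2, 4], [3]]
    print(wiener_index(chain))           # 20 for the 5-vertex path graph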

Radiation in an Emitting and Absorbing Medium: A Gridless Approach

Numerical Heat Transfer, Part B

Gritzo, Louis A.; Strickland, James H.; DesJardin, Paul E.

A gridless technique for the solution of the integral form of the radiative heat flux equation for emitting and absorbing media is presented. Treatment of non-uniform absorptivity and gray boundaries is included. As part of this work, the authors have developed fast multipole techniques for extracting radiative heat flux quantities from the temperature fields of one-dimensional and three-dimensional geometries. Example calculations include those for one-dimensional radiative heat transfer through multiple flame sheets, a three-dimensional enclosure with black walls, and an axisymmetric enclosure with black walls.
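
Along a single ray through an emitting and absorbing (non-scattering) medium, the integral form referred to above has the standard formal solution below, with κ the absorption coefficient, I_b the blackbody intensity, and τ the optical depth; the notation is generic rather than the authors' own.

    I(s) = I(0)\, e^{-\tau(s)}
           + \int_{0}^{s} \kappa(s')\, I_b\!\left(T(s')\right)
             e^{-\left[\tau(s) - \tau(s')\right]}\, ds',
    \qquad
    \tau(s) = \int_{0}^{s} \kappa(s'')\, ds''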
