Publications

Results 87076–87100 of 99,299


Policy-based network management: state of the industry and desired functionality for the enterprise network: security policy/testing technology evaluation

Keliiaa, Curtis M.; Tolendino, Lawrence F.; Taylor, Jeffrey L.; Macalpine, Timothy L.; Morgan, Christine A.

Policy-based network management (PBNM) uses policy-driven automation to manage complex enterprise and service provider networks. Such management is strongly supported by industry standards, state of the art technologies and vendor product offerings. We present a case for the use of PBNM and related technologies for end-to-end service delivery. We provide a definition of PBNM terms, a discussion of how such management should function and the current state of the industry. We include recommendations for continued work that would allow for PBNM to be put in place over the next five years in the unclassified environment.
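The core PBNM idea the abstract describes is policy-driven automation: condition/action rules evaluated against observed network state. A minimal sketch of that rule-evaluation loop follows; all names, thresholds, and actions here are illustrative, not part of any PBNM standard or vendor product.

```python
# Minimal sketch of policy-driven management: each policy is a
# condition/action pair evaluated against the current network state.
# All names and thresholds are hypothetical illustrations.

def make_policy(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

def evaluate(policies, state):
    """Return (policy name, action result) for every policy whose condition matches."""
    fired = []
    for p in policies:
        if p["condition"](state):
            fired.append((p["name"], p["action"](state)))
    return fired

policies = [
    make_policy(
        "throttle-bulk-traffic",
        condition=lambda s: s["link_utilization"] > 0.9,
        action=lambda s: "rate-limit bulk class to 10%",
    ),
    make_policy(
        "quarantine-host",
        condition=lambda s: s["failed_logins"] >= 5,
        action=lambda s: f"move host {s['host']} to quarantine VLAN",
    ),
]

state = {"link_utilization": 0.95, "failed_logins": 2, "host": "10.0.0.7"}
print(evaluate(policies, state))  # only the utilization policy fires
```

In a real deployment the conditions would come from a policy repository and the actions from device-configuration back ends; the separation of rule from enforcement point is the part this sketch illustrates.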


ALEGRA-HEDP: version 4.6

Brunner, Thomas A.; Garasi, Christopher J.; Haill, Thomas A.; Mehlhorn, Thomas A.; Robinson, Allen C.; Summers, Randall M.

ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation in inviscid fluids and solids. This document describes user options for modeling resistive magnetohydrodynamics, thermal conduction, radiation transport effects, and two-temperature material physics.


Uniaxial and triaxial compression tests of silicon carbide ceramics under quasi-static loading condition

Brannon, Rebecca M.; Bronowski, David R.

To establish mechanical properties and failure criteria of silicon carbide (SiC-N) ceramics, a series of quasi-static compression tests has been completed using a high-pressure vessel and a unique sample alignment jig. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Results from the uniaxial and triaxial compression tests established the failure threshold for the SiC-N ceramics in terms of stress invariants (I₁ and J₂) over the range 1246 < I₁ < 2405. In this range, results are fitted to the following limit function (Fossum and Brannon, 2004): √J₂ (MPa) = a₁ − a₃e^(−a₂(I₁/3)) + a₄(I₁/3), where a₁ = 10181 MPa, a₂ = 4.2 × 10⁻⁴, a₃ = 11372 MPa, and a₄ = 1.046. Combining these quasi-static triaxial compression strength measurements with existing data at higher pressures naturally results in different values for the least-squares fit to this function, appropriate over a broader pressure range. These triaxial compression tests are significant because they constitute the first successful measurements of SiC-N compressive strength under quasi-static conditions. Because SiC-N has an unconfined compressive strength of ≈3800 MPa, it had heretofore been tested only under dynamic conditions, which can achieve a load large enough to induce failure. Obtaining reliable quasi-static strength measurements required the design of a special alignment jig and load-spreader assembly, as well as redundant gages to ensure alignment. When considered in combination with existing dynamic strength measurements, these data significantly advance the characterization of the pressure dependence of strength, which is important for penetration simulations, where failed regions are often at lower pressures than intact regions.
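The fitted limit function can be evaluated directly from the constants quoted in the abstract. The short sketch below does so at the endpoints and midpoint of the reported fitting range; note that outside 1246 < I₁ < 2405 the curve is an extrapolation.

```python
import math

# Evaluate the fitted limit function from the abstract,
#   sqrt(J2) = a1 - a3*exp(-a2*(I1/3)) + a4*(I1/3),
# using the quasi-static fit constants quoted there.

A1, A3 = 10181.0, 11372.0   # MPa
A2, A4 = 4.2e-4, 1.046      # A2 in 1/MPa, A4 dimensionless

def sqrt_j2(i1_mpa):
    """Shear limit sqrt(J2) in MPa as a function of the stress invariant I1."""
    p = i1_mpa / 3.0
    return A1 - A3 * math.exp(-A2 * p) + A4 * p

# The fit is reported over 1246 < I1 < 2405; elsewhere it extrapolates.
for i1 in (1246.0, 1800.0, 2405.0):
    print(f"I1 = {i1:6.0f} MPa -> sqrt(J2) = {sqrt_j2(i1):7.1f} MPa")
```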


3rd Tech DeltaSphere-3000 Laser 3D Scene Digitizer infrared laser scanner hazard analysis

Augustoni, Arnold L.

A laser hazard analysis and safety assessment was performed for the 3rd Tech model DeltaSphere-3000® Laser 3D Scene Digitizer infrared laser scanner, based on the 2000 version of the American National Standards Institute's standard Z136.1, Safe Use of Lasers. The portable scanner system is used in the Robotic Manufacturing Science and Engineering Laboratory (RMSEL). This scanning system had been proposed as a demonstrator for a new application. The manufacturer lists the Nominal Ocular Hazard Distance (NOHD) as less than 2 meters, and it was necessary that SNL validate this NOHD prior to its use in a demonstration involving the general public. A formal laser hazard analysis is presented for the typical mode of operation in the current configuration, as well as for a possible modified mode and alternative configuration.
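For context, the NOHD for a small-source CW beam follows a standard ANSI Z136.1-form relation: the distance at which beam divergence spreads the power over a large enough area that the irradiance falls to the maximum permissible exposure (MPE). The sketch below evaluates that relation with hypothetical placeholder parameters; the actual DeltaSphere-3000 figures are what the hazard analysis itself establishes.

```python
import math

# Standard small-source CW laser NOHD relation (ANSI Z136.1 form):
#   NOHD = (1/phi) * ( sqrt(4*P / (pi*MPE)) - a )
# where P is output power (W), MPE the maximum permissible exposure
# (W/cm^2), a the emergent beam diameter (cm), phi the divergence (rad).
# The parameter values below are HYPOTHETICAL placeholders, not the
# DeltaSphere-3000 specifications.

def nohd_cm(power_w, mpe_w_cm2, beam_diam_cm, divergence_rad):
    d = math.sqrt(4.0 * power_w / (math.pi * mpe_w_cm2))
    return max(0.0, (d - beam_diam_cm) / divergence_rad)

# Hypothetical few-milliwatt infrared source:
r = nohd_cm(power_w=5e-3, mpe_w_cm2=1e-3, beam_diam_cm=0.2, divergence_rad=1e-2)
print(f"NOHD ~ {r:.0f} cm")
```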


SIERRA framework version 4: solver services

Williams, Alan B.

Several SIERRA applications make use of third-party libraries to solve systems of linear and nonlinear equations, and to solve eigenproblems. The classes and interfaces in the SIERRA framework that provide linear system assembly services and access to solver libraries are collectively referred to as solver services. This paper provides an overview of SIERRA's solver services including the design goals that drove the development, and relationships and interactions among the various classes. The process of assembling and manipulating linear systems will be described, as well as access to solution methods and other operations.
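The assembly service the abstract mentions amounts to scatter-adding element contributions into a global sparse system. A toy version of that operation is sketched below; it illustrates the idea only and is not the SIERRA solver-services API.

```python
# Sketch of a linear-system assembly service: element matrices and
# right-hand-side vectors are scatter-added into a global sparse matrix
# keyed by (row, col). Illustrative only; not the SIERRA API.

from collections import defaultdict

class LinearSystem:
    def __init__(self, n):
        self.n = n
        self.A = defaultdict(float)   # sparse matrix, (i, j) -> value
        self.b = [0.0] * n            # right-hand side

    def sum_into(self, dofs, elem_matrix, elem_rhs):
        """Scatter-add one element's stiffness and load into the global system."""
        for a, i in enumerate(dofs):
            self.b[i] += elem_rhs[a]
            for c, j in enumerate(dofs):
                self.A[(i, j)] += elem_matrix[a][c]

# Two 1-D "elements" sharing node 1 (a 3-node bar with unit stiffness):
system = LinearSystem(3)
k = [[1.0, -1.0], [-1.0, 1.0]]
system.sum_into([0, 1], k, [0.0, 0.0])
system.sum_into([1, 2], k, [0.0, 1.0])
print(dict(system.A))  # A[(1, 1)] accumulates 2.0 from the shared node
```

The accumulation at shared degrees of freedom (node 1 here) is exactly what a framework-level assembly layer manages on behalf of applications before handing the system to a solver library.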


Finite Element Interface to Linear Solvers (FEI) version 2.9: user's guide and reference manual

Williams, Alan B.

The Finite Element Interface to Linear Solvers (FEI) is a linear system assembly library. Sparse systems of linear equations arise in many computational engineering applications, and the solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver package capable of solving all of the linear systems that arise. This motivates the need to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by various solver libraries for data assembly and problem solution differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer that puts a 'common face' on various solver libraries. The FEI has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory. The original FEI offered several advantages over using linear algebra libraries directly, but also imposed significant limitations and disadvantages. A new set of interfaces has been added with the goal of removing the limitations of the original FEI while maintaining and extending its strengths.
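The 'common face' idea can be made concrete with a small abstraction-layer sketch: the application codes against one interface, and thin adapters map it onto each solver library's native calls. The backend below is a toy Jacobi iteration standing in for a real library; none of these names come from the FEI itself.

```python
# Sketch of a solver abstraction layer: the application never names a
# concrete solver library, only the common interface. Backend names here
# are illustrative, not FEI classes.

class SolverBackend:
    def solve(self, A, b):
        raise NotImplementedError

class JacobiBackend(SolverBackend):
    """Toy iterative backend standing in for a real solver library."""
    def solve(self, A, b, iters=200):
        n = len(b)
        x = [0.0] * n
        for _ in range(iters):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

def assemble_and_solve(backend, A, b):
    # Swapping solver libraries means swapping this one object.
    return backend.solve(A, b)

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = assemble_and_solve(JacobiBackend(), A, b)
print(x)  # converges toward the exact solution [1/11, 7/11]
```

Because only the adapter touches library-specific calls, switching from one solver package to another leaves the application's assembly and solve code unchanged, which is the limitation of direct library use that the FEI addresses.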


LDRD final report on massively-parallel linear programming: the parPCx system

Boman, Erik G.; Phillips, Cynthia A.

This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project "Massively-Parallel Linear Programming". We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver, called parPCx, and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
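The problem class the abstract defines (minimize a linear objective c·x subject to linear constraints Ax ≤ b, x ≥ 0) can be illustrated at toy scale: for two variables the optimum lies at a vertex of the feasible polygon, so brute-force vertex enumeration suffices. This is only a sketch of the problem, not of parPCx, which uses Mehrotra's interior-point predictor-corrector method.

```python
# Tiny LP illustration: minimize c.x subject to A.x <= b and x >= 0,
# solved for two variables by enumerating constraint-pair intersections
# and keeping the best feasible vertex. (Real LP solvers such as parPCx
# use interior-point or simplex methods, not this.)

from itertools import combinations

def solve_lp_2d(c, A, b):
    # Fold x >= 0 in as constraints -x_i <= 0, then intersect pairs.
    rows = A + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = b + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a11, a12), (a21, a22) = rows[i], rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel constraints, no vertex
        x = (rhs[i] * a22 - a12 * rhs[j]) / det
        y = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(r[0] * x + r[1] * y <= v + 1e-9 for r, v in zip(rows, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val < best[0]:
                best = (val, (x, y))
    return best

# minimize -x - y  s.t.  x + 2y <= 4,  3x + y <= 6,  x, y >= 0
print(solve_lp_2d([-1.0, -1.0], [[1.0, 2.0], [3.0, 1.0]], [4.0, 6.0]))
```

Enumeration is exponential in the number of constraints, which is precisely why decades of effort have gone into simplex and interior-point methods for problems of realistic size.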


Photo-control of nanointeractions

Bell, Nelson S.; Jamison, Gregory M.; Marbury, Justin L.; Piech, Marcin P.; Thomes, William J.; Staiger, Chad L.

The manipulation of physical interactions between structural moieties on the molecular scale is a fundamental hurdle in the realization and operation of nanostructured materials and high surface area microsystem architectures. These include such nano-interaction-based phenomena as self-assembly, fluid flow, and interfacial tribology. The proposed research utilizes photosensitive molecular structures to tune such interactions reversibly. This new material strategy provides optical actuation of nano-interactions impacting behavior on both the nano- and macroscales and with potential to impact directed nanostructure formation, microfluidic rheology, and tribological control.


Characteristics and sources of intermediate size particles in recovery boilers: final project report

Shaddix, Christopher R.

As part of the U.S. Department of Energy (DOE) Office of Industrial Technologies (OIT) Industries of the Future (IOF) Forest Products research program, a collaborative investigation was conducted on the sources, characteristics, and deposition of particles intermediate in size between submicron fume and carryover in recovery boilers. Laboratory experiments on suspended-drop combustion of black liquor and on black liquor char bed combustion demonstrated that both processes generate intermediate size particles (ISP), amounting to 0.5-2% of the black liquor dry solids mass (BLS). Measurements in two U.S. recovery boilers show variable loadings of ISP in the upper furnace, typically between 0.6-3 g/Nm³, or 0.3-1.5% of BLS. The measurements show that the ISP mass size distribution increases with size from 5-100 µm, implying that a substantial amount of ISP inertially deposits on steam tubes. ISP particles are depleted in potassium, chlorine, and sulfur relative to the fuel composition. Comprehensive boiler modeling demonstrates that ISP concentrations are substantially overpredicted when using a previously developed algorithm for ISP generation. Equilibrium calculations suggest that alkali carbonate decomposition occurs at intermediate heights in the furnace and may lead to partial destruction of ISP particles formed lower in the furnace. ISP deposition is predicted to occur in the superheater sections, at temperatures greater than 750 °C, when the particles are at least partially molten.


Characterization of soot properties in two-meter JP-8 pool fires

Jensen, Kirk A.; Suo-Anttila, Jill M.

The thermal hazard posed by large hydrocarbon fires is dominated by the radiative emission from high temperature soot. Since the optical properties of soot, especially in the infrared region of the electromagnetic spectrum, as well as its morphological properties, are not well known, efforts are underway to characterize these properties. Measurements of these soot properties in large fires are important for heat transfer calculations, for interpretation of laser-based diagnostics, and for developing soot property models for fire field models. This research uses extractive measurement diagnostics to characterize soot optical properties, morphology, and composition in 2 m pool fires. For measurement of the extinction coefficient, soot extracted from the flame zone is transported to a transmission cell where measurements are made using both visible and infrared lasers. Soot morphological properties are obtained by transmission electron microscopy of soot samples collected thermophoretically within the flame zone, in the overfire region, and in the transmission cell. Soot composition, including carbon-to-hydrogen ratio and polycyclic aromatic hydrocarbon concentration, is obtained by analysis of soot collected on filters. Average dimensionless extinction coefficients of 8.4 ± 1.2 at 635 nm and 8.7 ± 1.1 at 1310 nm agree well with recent measurements in the overfire region of JP-8 and other fuels in lab-scale burners and fires. Average soot primary particle diameters, radii of gyration, and fractal dimensions agree with these recent studies. Rayleigh-Debye-Gans scattering theory applied to the measured fractal parameters shows qualitative agreement with the trends in measured dimensionless extinction coefficients. Density and composition results are detailed in the report.
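The dimensionless extinction coefficient is what converts a laser transmission measurement into a soot loading: with Beer-Lambert attenuation I/I₀ = exp(−K_e f_v L/λ), a measured transmission τ over path length L gives the soot volume fraction f_v = −λ ln(τ)/(K_e L). The sketch below applies this standard relation using the 635 nm value reported in the abstract; the transmission value is a hypothetical measurement, not a result from the report.

```python
import math

# Soot volume fraction from a transmission measurement via the
# dimensionless extinction coefficient K_e (Beer-Lambert form):
#   f_v = -lambda * ln(tau) / (K_e * L)
# K_e = 8.4 at 635 nm is the value reported in the abstract; the
# transmission tau below is a hypothetical measurement.

def soot_volume_fraction(tau, path_m, wavelength_m, k_e):
    return -wavelength_m * math.log(tau) / (k_e * path_m)

f_v = soot_volume_fraction(tau=0.5, path_m=0.10,
                           wavelength_m=635e-9, k_e=8.4)
print(f"f_v ~ {f_v:.2e}")  # of order 5e-7, i.e. roughly 0.5 ppm
```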


Uncertainty analysis of heat flux measurements estimated using a one-dimensional, inverse heat-conduction program

Figueroa Faria, Victor G.

The measurement of heat flux in hydrocarbon fuel fires (e.g., diesel or JP-8) is difficult due to high temperatures and the sooty environment. Uncooled commercially available heat flux gages do not survive in long duration fires, and cooled gages often become covered with soot, thus changing the gage calibration. An alternate method that is rugged and relatively inexpensive is based on inverse heat-conduction methods, which estimate absorbed heat flux at specific material interfaces using temperature/time histories, boundary conditions, material properties, and usually an assumption of one-dimensional (1-D) heat flow. This method is commonly used at Sandia's fire test facilities. In this report, an uncertainty analysis was performed for a specific example to quantify the effect of input parameter variations on the heat flux estimated with the inverse heat-conduction method. The approach was to compare results from a number of cases using modified inputs against a base case. The response of a 304 stainless-steel cylinder [about 30.5 cm (12 in.) in diameter and 0.32 cm (1/8 in.) thick] filled with 2.5-cm-thick (1-in.) ceramic fiber insulation was examined. The varied inputs to the inverse heat-conduction program were steel-wall thickness, thermal conductivity, and volumetric heat capacity; insulation thickness, thermal conductivity, and volumetric heat capacity; temperature uncertainty; boundary conditions; temperature sampling period; and numerical inputs. One-dimensional heat transfer was assumed in all cases. Results of the analysis show that, at the maximum heat flux, the most important parameters were temperature uncertainty, steel thickness, and steel volumetric heat capacity. The use of constant thermal properties rather than temperature-dependent values also made a significant difference in the resultant heat flux; therefore, temperature-dependent values should be used. As an example, several parameters were varied to estimate the uncertainty in heat flux. The result was 15-19% uncertainty at 95% confidence at the highest flux, neglecting multidimensional effects.
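The simplest inverse estimate, useful for seeing why wall thickness and volumetric heat capacity dominate the uncertainty, is the thin-wall (lumped) limit: absorbed flux q = ρc·L·dT/dt from the measured wall temperature history. The sketch below is this lumped approximation only, not the 1-D inverse code used in the report, and the property values are nominal 304 stainless numbers for illustration.

```python
# Thin-wall (lumped) inverse heat-flux estimate: q = rho*c * L * dT/dt.
# This is a simplification of the 1-D inverse method described in the
# report; property values are nominal 304 stainless figures, for
# illustration only.

RHO_C = 3.9e6     # volumetric heat capacity of 304 SS, J/(m^3*K) (nominal)
L = 0.0032        # wall thickness, m (1/8 in.)

def flux_lumped(dT_dt, rho_c=RHO_C, thickness=L):
    """Absorbed heat flux (W/m^2) in the lumped thin-wall limit."""
    return rho_c * thickness * dT_dt

q_nom = flux_lumped(10.0)                       # 10 K/s heating rate
q_hi  = flux_lumped(10.0, thickness=1.05 * L)   # +5% wall thickness
print(q_nom, q_hi / q_nom)  # the flux estimate scales linearly with thickness
```

Because the estimate is linear in both thickness and ρc, a 5% error in either input propagates directly into a 5% error in flux, consistent with the report's finding that these parameters rank among the most important.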
