Publications

Discussion tracking in Enron email using PARAFAC

Survey of Text Mining II: Clustering, Classification, and Retrieval

Bader, Brett W.; Berry, Michael W.; Browne, Murray

In this chapter, we apply a nonnegative tensor factorization algorithm to extract and detect meaningful discussions from electronic mail messages for a period of one year. For the publicly released Enron electronic mail collection, we encode a sparse term-author-month array for subsequent three-way factorization using the PARAllel FACtors (PARAFAC) decomposition first proposed by Harshman. Using nonnegative tensors, we preserve natural data nonnegativity and avoid the subtractive basis-vector and encoding interactions present in techniques such as principal component analysis. Results in thread detection and interpretation are discussed in the context of published Enron business practices and activities, and benchmarks addressing the computational complexity of our approach are provided. The resulting tensor factorizations can be used to produce Gantt-like charts for assessing the duration, order, and dependencies of focused discussions against the progression of time. © 2008 Springer-Verlag London.
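
As a rough illustration of the technique named above (not the chapter's actual algorithm or data), the sketch below factors a small synthetic term-author-month count tensor with a multiplicative-update nonnegative PARAFAC in Python; the tensor sizes, rank, and variable names are invented for the example.

    import numpy as np

    def khatri_rao(U, V):
        # Column-wise Kronecker product of U (m x r) and V (n x r) -> (m*n x r).
        return np.stack([np.kron(U[:, i], V[:, i]) for i in range(U.shape[1])], axis=1)

    def nonneg_parafac(X, rank, n_iter=200, eps=1e-9, seed=0):
        # Rank-`rank` nonnegative PARAFAC/CP of a 3-way array via multiplicative updates.
        rng = np.random.default_rng(seed)
        A, B, C = (rng.random((n, rank)) for n in X.shape)
        unfold = lambda T, m: np.moveaxis(T, m, 0).reshape(T.shape[m], -1)
        X0, X1, X2 = (unfold(X, m) for m in range(3))
        for _ in range(n_iter):
            A *= (X0 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
            B *= (X1 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
            C *= (X2 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
        return A, B, C

    # Toy term-author-month count tensor, factored into 5 "discussion" components.
    X = np.random.default_rng(1).poisson(1.0, size=(50, 20, 12)).astype(float)
    terms, authors, months = nonneg_parafac(X, rank=5)

Each nonnegative column triplet (terms[:, r], authors[:, r], months[:, r]) can then be read as one candidate discussion: its dominant terms, its participants, and its activity over the months, which is what the Gantt-like charts summarize.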

Finite element solution of optimal control problems arising in semiconductor modeling

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Bochev, Pavel B.; Ridzal, Denis

Optimal design, parameter estimation, and inverse problems arising in the modeling of semiconductor devices lead to optimization problems constrained by systems of PDEs. We study the impact of different state equation discretizations on optimization problems whose objective functionals involve flux terms. Galerkin methods, in which the flux is a derived quantity, are compared with mixed Galerkin discretizations where the flux is approximated directly. Our results show that the latter approach leads to more robust and accurate solutions of the optimization problem, especially for highly heterogeneous materials with large jumps in material properties. © 2008 Springer.
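
As a sketch of the distinction being studied, for a generic model problem with homogeneous Dirichlet conditions (not the authors' semiconductor equations): given $-\nabla\cdot(k\nabla u) = f$, a standard Galerkin method solves

    \int_\Omega k\,\nabla u_h \cdot \nabla v_h \, dx \;=\; \int_\Omega f\, v_h \, dx \qquad \forall\, v_h \in V_h,

so the flux $q_h = -k\nabla u_h$ is only a derived quantity, discontinuous across element boundaries and least accurate exactly where $k$ jumps. A mixed Galerkin method instead rewrites the problem as the first-order system $q = -k\nabla u$, $\nabla\cdot q = f$ and approximates the flux directly:

    \int_\Omega k^{-1} q_h \cdot p_h \, dx \;-\; \int_\Omega u_h \, (\nabla\cdot p_h) \, dx \;=\; 0,
    \qquad
    \int_\Omega (\nabla\cdot q_h)\, w_h \, dx \;=\; \int_\Omega f\, w_h \, dx,

for all $p_h$, $w_h$ in, e.g., Raviart-Thomas and piecewise-constant spaces, yielding a flux approximation whose normal components are continuous across element faces.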

New applications of the verdict library for standardized mesh verification pre, post, and end-to-end processing

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Pébay, Philippe P.; Thompson, David; Shepherd, Jason F.; Knupp, Patrick K.; Lisle, Curtis; Magnotta, Vincent A.; Grosland, Nicole M.

Verdict is a collection of subroutines for evaluating the geometric qualities of triangles, quadrilaterals, tetrahedra, and hexahedra using a variety of functions. A quality is a real number assigned to one of these shapes depending on its particular vertex coordinates. These functions are used to evaluate the input to finite element, finite volume, boundary element, and other types of solvers that approximate the solution to partial differential equations defined over regions of space. This article describes the most recent version of Verdict and provides a summary of the main properties of the quality functions offered by the library. Finally, it demonstrates the versatility and applicability of Verdict by illustrating its use in several scientific applications that pertain to pre-, post-, and end-to-end processing.
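
Verdict's actual C API is not reproduced here; as an illustration of what a quality function is, the following Python sketch implements one representative metric, the minimum scaled corner Jacobian of a hexahedron, directly from vertex coordinates. The bottom-face-then-top-face vertex ordering is an assumption of this sketch, not taken from the library.

    import numpy as np

    # For each hex corner, the three adjacent vertices (ordered so that a
    # well-shaped element yields a positive determinant at every corner).
    _HEX_CORNERS = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
                    (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

    def hex_min_scaled_jacobian(verts):
        # 1.0 for a cube; values near or below 0 flag degenerate or inverted corners.
        verts = np.asarray(verts, dtype=float)
        worst = np.inf
        for i, (a, b, c) in enumerate(_HEX_CORNERS):
            e = np.stack([verts[a] - verts[i], verts[b] - verts[i], verts[c] - verts[i]])
            worst = min(worst, np.linalg.det(e) / np.prod(np.linalg.norm(e, axis=1)))
        return worst

    cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
            (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    print(hex_min_scaled_jacobian(cube))  # -> 1.0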

A selective approach to conformal refinement of unstructured hexahedral finite element meshes

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Parrish, Michael; Borden, Michael; Staten, Matthew; Benzley, Steven

Hexahedral refinement increases the density of an all-hexahedral mesh in a specified region, improving numerical accuracy. Previous research relied solely on sheet refinement theory, which made implementations computationally expensive and unable to handle concave refinement regions and self-intersecting hex sheets effectively. The Selective Approach method is a new procedure that combines two diverse methodologies to create an efficient and robust algorithm able to handle the above-stated problems. These two refinement methods are: (1) element-by-element refinement and (2) directional refinement. In element-by-element refinement, the three inherent directions of a hexahedron are refined in one step using one of seven templates. Because element-by-element refinement is computationally superior to directional refinement but unable to handle concavities, it is used in all areas of the specified region except regions local to concavities. The directional refinement scheme refines the three inherent directions of a hexahedron separately on a hex-by-hex basis. This differs from sheet refinement, which refines hexahedra using hex sheets. Directional refinement correctly handles concave refinement regions. A ranking system and propagation scheme allow directional refinement to work within the confines of the Selective Approach algorithm.
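
The paper's seven templates are not reproduced here; as a toy sketch of the element-by-element idea, the following Python refines all three inherent directions of one hexahedron in a single step (a 1-to-27 split), placing new nodes by trilinear interpolation of the corner coordinates. Names and the vertex ordering are illustrative assumptions.

    import numpy as np

    def trilerp(corners, u, v, w):
        # Trilinear interpolation of 8 hex corners at parametric (u, v, w) in [0,1]^3.
        # Corner order: (0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1).
        wts = np.array([(1-u)*(1-v)*(1-w), u*(1-v)*(1-w), u*v*(1-w), (1-u)*v*(1-w),
                        (1-u)*(1-v)*w,     u*(1-v)*w,     u*v*w,     (1-u)*v*w])
        return wts @ np.asarray(corners, dtype=float)

    def trisect_hex(corners):
        # Split one hexahedron into 27 by trisecting every direction at once.
        t = np.linspace(0.0, 1.0, 4)  # parametric cut planes at 0, 1/3, 2/3, 1
        node = {(i, j, k): trilerp(corners, t[i], t[j], t[k])
                for i in range(4) for j in range(4) for k in range(4)}
        return [[node[i, j, k],   node[i+1, j, k],   node[i+1, j+1, k],   node[i, j+1, k],
                 node[i, j, k+1], node[i+1, j, k+1], node[i+1, j+1, k+1], node[i, j+1, k+1]]
                for i in range(3) for j in range(3) for k in range(3)]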

Methods and applications of generalized sheet insertion for hexahedral meshing

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Merkley, Karl; Ernst, Corey; Shepherd, Jason F.; Borden, Michael J.

This paper presents methods and applications of sheet insertion in a hexahedral mesh. A hexahedral sheet is dual to a layer of hexahedra in a hexahedral mesh. Because of symmetries within a hexahedral element, every hexahedral mesh can be viewed as a collection of these sheets. It is possible to insert new sheets into an existing mesh, and these new sheets can be used to define new mesh boundaries, refine the mesh, or, in some cases, improve quality in an existing mesh. Sheet insertion has a broad range of possible applications, including mesh generation, boundary refinement, r-adaptivity, and the joining of existing meshes. Examples of each of these applications are demonstrated.
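
Sheet insertion on a general unstructured hex mesh (pillowing) is more involved than any short sketch can show; the following Python illustrates only the structured-grid analogue, where inserting a sheet amounts to duplicating one plane of nodes along a grid direction, creating a zero-thickness layer of hexahedra that smoothing can later inflate.

    import numpy as np

    def insert_sheet(nodes, axis, index):
        # Duplicate the node plane at `index` along `axis`; the copy bounds a
        # new (initially zero-thickness) layer of hexahedra.
        return np.insert(nodes, index, np.take(nodes, index, axis=axis), axis=axis)

    # nodes: (nx, ny, nz, 3) coordinates of a structured hex mesh.
    nodes = np.stack(np.meshgrid(np.arange(4.0), np.arange(3.0), np.arange(3.0),
                                 indexing="ij"), axis=-1)
    refined = insert_sheet(nodes, axis=0, index=2)
    print(nodes.shape, "->", refined.shape)  # (4, 3, 3, 3) -> (5, 3, 3, 3)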

Performance of a pulsed ion beam with a renewable cryogenically cooled ion source

Laser and Particle Beams

Renk, T.J.; Mann, Gregory A.; Torres, G.A.

For operation of an ion source in an intense ion beam diode, it is desirable to form a localized and robust source of high purity. A cryogenically operated ion source has great promise, since the ions are formed from a condensed high-purity gas, which has been confined to a relatively thin ice layer on the anode surface. Previous experiments established the principles of operation of such an ion source but were limited in repetitive duration by the use of short-lived liquid-He cooling of the anode surface. We detail here the successful development of a Cryo-Diode in which the cooling was achieved with a closed-cycle cryo-pump, resulting in an ion source design that can potentially be operated for an indefinite duration. Time-of-flight measurements with Faraday cups indicate that the resultant ion beam is of high purity and composed of singly charged ions formed from the gas frozen onto the anode surface. © 2008 Cambridge University Press.
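
As a sketch of how such time-of-flight measurements identify the beam species: a singly charged ion accelerated through a potential V arrives at a Faraday cup at distance L after a time proportional to the square root of its mass, so each gas frozen onto the anode produces a distinct arrival time. The numbers below are invented for illustration, not values from the paper.

    import numpy as np

    V = 500e3          # accelerating voltage [V] (assumed)
    L = 1.0            # anode-to-Faraday-cup flight distance [m] (assumed)
    q = 1.602e-19      # elementary charge [C]
    amu = 1.661e-27    # atomic mass unit [kg]

    def arrival_time(mass_amu):
        # Nonrelativistic time of flight for a singly charged ion: (1/2) m v^2 = q V.
        return L / np.sqrt(2.0 * q * V / (mass_amu * amu))

    for species, m in [("H+", 1.0), ("N+", 14.0), ("Ar+", 40.0)]:
        print(f"{species:4s} arrives after {arrival_time(m) * 1e9:7.1f} ns")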

Comparison of laboratory-scale solute transport visualization experiments with numerical simulation using cross-bedded sandstone

Advances in Water Resources

Tidwell, Vincent C.; Mckenna, Sean A.

Using a slab of Massillon Sandstone, laboratory-scale solute tracer experiments were carried out to test numerical simulations using the Advection-Dispersion Equation (ADE). While studies of a similar nature exist, our work differs in that we combine: (1) experimentation in naturally complex geologic media, (2) X-ray absorption imaging to visualize and quantify two-dimensional solute transport, (3) high-resolution transport property characterization, with (4) numerical simulation. The simulations use permeability, porosity, and solute concentration measured to sub-centimeter resolution. While bulk breakthrough curve characteristics were adequately matched, large discrepancies exist between the experimental and simulated solute concentration fields. Investigation of potential experimental errors suggests that the failure to fit the solute concentration fields may lie in the loss of intricate connectivity within the cross-bedded sandstone at scales finer than our property characterization measurements (i.e., sub-centimeter). © 2008 Elsevier Ltd. All rights reserved.
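
As a minimal sketch of the forward model involved (not the authors' simulator), the following Python integrates the one-dimensional advection-dispersion equation with upwind advection and central-difference dispersion; all parameter values are invented for illustration, whereas the paper's simulations used measured, sub-centimeter-resolution property fields.

    import numpy as np

    v, D = 1.0e-4, 1.0e-7    # pore velocity [m/s], dispersion coeff. [m^2/s] (assumed)
    L, nx = 0.5, 500         # domain length [m], number of cells
    dx = L / nx
    dt = 0.5 * min(dx / v, dx * dx / (2.0 * D))   # half the explicit stability limit

    c = np.zeros(nx)
    c[:10] = 1.0             # tracer near the inlet (c[0] acts as a fixed boundary)

    for _ in range(1500):
        adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection
        dsp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # central dispersion
        c[1:-1] += dt * (adv + dsp)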

Application specific compression: final report

Melgaard, David K.; Lewis, Phillip; Lee, David S.; Carlson, Jeffrey; Byrne, Raymond H.; Harrison, Carol D.

With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into coefficients that maintain spatial as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited to our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression is data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, whereas conventional lossless techniques achieved factors of less than 3.
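
A minimal sketch of the scheme described above, using a plain Haar wavelet for simplicity (the report does not specify this particular wavelet): transform the signal, zero the small detail coefficients, and reconstruct.

    import numpy as np

    def haar_fwd(x, levels):
        # Multi-level 1-D Haar transform: [approximation, coarsest ... finest details].
        a, details = np.asarray(x, dtype=float), []
        for _ in range(levels):
            a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
            details.append(d)
        return [a] + details[::-1]

    def haar_inv(coeffs):
        # Exact inverse of haar_fwd.
        a = coeffs[0]
        for d in coeffs[1:]:
            out = np.empty(2 * a.size)
            out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
            a = out
        return a

    # Noisy signal with one broad, low-frequency "target" feature.
    t = np.linspace(0.0, 1.0, 1024)
    sig = np.exp(-((t - 0.5) / 0.05) ** 2)
    sig += 0.02 * np.random.default_rng(0).standard_normal(t.size)

    coeffs = haar_fwd(sig, levels=6)
    kept = [coeffs[0]] + [np.where(np.abs(d) > 0.05, d, 0.0) for d in coeffs[1:]]
    frac = sum(int((d == 0).sum()) for d in kept[1:]) / (sig.size - kept[0].size)
    recon = haar_inv(kept)
    print(f"zeroed {frac:.0%} of detail coefficients, "
          f"max reconstruction error {np.abs(recon - sig).max():.3f}")

The mostly-zero coefficient arrays then compress well under any lossless coder, which is the entropy argument the paragraph makes.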

Neutral atom traps

Pack, Michael P.

This report describes progress in designing a neutral atom trap capable of trapping sub-millikelvin atoms in a magnetic trap and shuttling the atoms across the atom chip from a collection area to an optical cavity. The numerical simulation and atom chip design are discussed. Also discussed are preliminary calculations of quantum noise sources in Kerr nonlinear optics measurements based on electromagnetically induced transparency. These types of measurements may be important for quantum nondemolition measurements at the few-photon limit.
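
As a sketch of one standard design estimate for wire-based atom-chip magnetic traps (a generic side-guide formula, not the report's actual chip parameters): a current-carrying wire plus a transverse bias field produces a field minimum at height mu0*I/(2*pi*B_bias) above the wire.

    # Generic atom-chip side-guide estimate; the current and bias field
    # below are assumptions for illustration only.
    MU0_OVER_2PI = 2.0e-7                 # T*m/A

    I_wire = 2.0                          # wire current [A] (assumed)
    B_bias = 20e-4                        # transverse bias field [T], i.e. 20 G (assumed)

    y0 = MU0_OVER_2PI * I_wire / B_bias   # trap height above the wire [m]
    grad = B_bias / y0                    # field gradient at the minimum [T/m]
    print(f"trap height {y0 * 1e6:.0f} um, gradient {grad:.0f} T/m")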

Homeland security R&D roadmapping: risk-based methodological options

Brandt, Larry D.

The Department of Energy (DOE) National Laboratories support the Department of Homeland Security (DHS) in the development and execution of a research and development (R&D) strategy to improve the nation's preparedness against terrorist threats. Current approaches to planning and prioritization of DHS research decisions are informed by risk assessment tools and processes intended to allocate resources to programs that are likely to have the highest payoff. Early applications of such processes have faced challenges in several areas, including characterization of the intelligent adversary and linkage to strategic risk management decisions. The risk-based analysis initiatives at Sandia Laboratories could augment the methodologies currently being applied by the DHS and could support more credible R&D roadmapping for national homeland security programs. Implementation and execution issues facing homeland security R&D initiatives within the national laboratories emerged as a particular concern in this research.

Enhanced Geothermal Systems (EGS) Well Construction Technology Evaluation Report

Polsky, Yarom; Knudsen, Steven D.; Raymond, David W.

This report provides an assessment of well construction technology for EGS, with two primary objectives: (1) determining the ability of existing technologies to develop EGS wells, and (2) identifying critical well construction research lines and development technologies that are likely to enhance prospects for EGS viability and improve overall economics.

Parallel tetrahedral mesh refinement with MOAB

Thompson, David; Pébay, Philippe P.

This report details work done to implement parallel, edge-based tetrahedral refinement in MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06], while information on design, performance, and operation specific to MOAB is contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation that impacts the algorithm. The introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
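
The report's streaming, edge-based algorithm and its parallel bookkeeping are not reproduced here; as a sketch of the underlying geometric operation, the following Python performs the standard uniform 1:8 ("red") subdivision of a single tetrahedron via edge midpoints, using Bey's choice of diagonal for the interior octahedron.

    import numpy as np

    _EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

    def refine_tet(verts):
        # Four corner children plus four children from the interior octahedron.
        v = [np.asarray(p, dtype=float) for p in verts]
        m01, m02, m03, m12, m13, m23 = ((v[i] + v[j]) / 2.0 for i, j in _EDGES)
        return [
            [v[0], m01, m02, m03], [m01, v[1], m12, m13],   # corner children
            [m02, m12, v[2], m23], [m03, m13, m23, v[3]],
            [m01, m02, m03, m13], [m01, m02, m12, m13],     # octahedron children
            [m02, m03, m13, m23], [m02, m12, m13, m23],
        ]

    children = refine_tet([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
    print(len(children))  # -> 8 child tetrahedra covering the parent exactly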
