Various aspects of mesh quality are surveyed to clarify the disconnect between the traditional uses of mesh quality metrics within industry and the fact that quality ultimately depends on the solution to the physical problem. Truncation error analysis for finite difference methods reveals no clear connection to most traditional mesh quality metrics. Finite element bounds on the interpolation error can be shown, in some cases, to be related to known quality metrics such as the condition number. On the other hand, the use of quality metrics that do not take solution characteristics into account can be valid in certain circumstances, primarily as a means of automatically detecting defective meshes. The use of such metrics when applied to simulations for which quality is highly dependent on the physical solution is clearly inappropriate. Various flaws and problems with existing quality metrics are mentioned, along with a discussion on the use of threshold values. In closing, the author advocates the investigation of explicitly-referenced quality metrics as a potential means of bridging the gap between a priori quality metrics and solution-dependent metrics.
The focus of this paper is on the development of validated models for wind turbine blades. Validation of these models is a comprehensive undertaking which requires carefully designing and executing experiments, proposing appropriate physics-based models, and applying correlation techniques to improve these models based on the test data. This paper will cover each of these three aspects of model validation, although the focus is on the third - model calibration. The result of the validation process is an understanding of the credibility of the model when used to make analytical predictions. These general ideas will be applied to a wind turbine blade designed, tested, and modeled at Sandia National Laboratories. The key points of the paper include discussions of the tests which are needed, the required level of detail in these tests to validate models of varying detail, and mathematical techniques for improving blade models. Results from investigations into calibrating simplified blade models are presented.
In 2002, Sandia National Laboratories (SNL) initiated a research program to demonstrate the use of carbon fiber in wind turbine blades and to investigate advanced structural concepts through the Blade Systems Design Study, known as the BSDS. One of the blade designs resulting from this program, commonly referred to as the BSDS blade, resulted from a systems approach in which manufacturing, structural, and aerodynamic performance considerations were all simultaneously included in the design optimization. The BSDS blade design utilizes "flatback" airfoils for the inboard section of the blade to achieve a lighter, stronger blade. Flatback airfoils are generated by opening up the trailing edge of an airfoil uniformly along the camber line, thus preserving the camber of the original airfoil. This process is in distinct contrast to the generation of truncated airfoils, where the trailing edge of the airfoil is simply cut off, changing the camber and subsequently degrading the aerodynamic performance. Compared to a thick conventional sharp trailing-edge airfoil, a flatback airfoil of the same thickness exhibits increased lift and reduced sensitivity to soiling. Although several commercial turbine manufacturers have expressed interest in utilizing flatback airfoils for their wind turbine blades, they are concerned with the potential extra noise that such a blade will generate from the blunt trailing edge of the flatback section. In order to quantify the noise generation characteristics of flatback airfoils, Sandia National Laboratories has conducted a wind tunnel test to measure the noise generation and aerodynamic performance characteristics of a regular DU97-300-W airfoil, a 10% trailing edge thickness flatback version of that airfoil, and the flatback fitted with a trailing edge treatment. The paper describes the test facility, the models, and the test methodology, and provides some preliminary results from the test.
This report focuses on our recent advances in the fabrication and processing of barium strontium titanate (BST) thin films by chemical solution deposition for next-generation functional integrated capacitors. Projected trends for capacitors include increasing capacitance density, decreasing operating voltages, decreasing dielectric thickness, and decreasing process cost. Key to all of these trends is the strong correlation between film phase evolution and the resulting microstructure; by exploiting this correlation, it becomes possible to tailor the microstructure for specific applications. This interplay will be discussed in relation to the resulting temperature-dependent dielectric response of the BST films.
Geometric features with characteristic lengths on the order of the size of the contact patch interface may be at least partly responsible for the variability observed in experimental measurements of structural stiffness and energy dissipation per cycle in a bolted joint. Experiments on combinations of two different types of joints (statically determinate single-joint and statically indeterminate three-joint structures) of nominally identical hardware show that the structural stiffness of the tested specimens varies by up to 25% and the energy dissipation varies by up to nearly 300%. A pressure-sensitive film was assembled into the interfaces of jointed structures to gain a qualitative understanding of the distribution of interfacial pressures of nominally conformal surfaces. The resultant pressure distributions suggest that there are misfit mechanisms that may influence the contact patch geometry as well as the structural response of the interface. These mechanisms include local plateaus and machining-induced waviness. The mechanisms are not consistent across nominally machined hardware interfaces. The proposed misfit mechanisms may be partly responsible for the variability in energy dissipation per cycle observed in joint experiments.
Damping in a micro-cantilever beam was measured over a very broad range of air pressures, from atmospheric pressure (10^5 Pa) down to 0.2 Pa. The beam was in open space, free from squeeze films. The damping ratio, due mainly to air drag, varied by a factor of 10^4 within this pressure range. The damping due to air drag was separated from other sources of energy dissipation so that air damping could be measured down to 10^-6 of the critical damping factor. The linearity of the damping was confirmed over a wide range of beam vibration levels. Lastly, the measured damping was compared with several existing theories for air-drag damping in both the rarefied and viscous flow regimes. The measured data indicate that in the rarefied regime the air damping is proportional to pressure and independent of viscosity, while in the viscous regime the damping is determined by viscosity.
The development of transmitter and receiver multichip module (MCM) subassemblies implemented in LTCC for an S-band radar application followed an approach that reduces the number of discrete devices and increases reliability. The LTCC MCM incorporates custom GaAs RF integrated circuits in Faraday cavities, novel methods of reducing line resistance and enhancing lumped-element Q, and a thick-film backplane which attaches to a heat sink. The incorporation of PIN diodes on the receiver and a 50 W power amplifier on the transmitter required methods for removing heat beyond what thermal vias can accomplish. The die is a high-voltage pHEMT GaAs power amplifier RFIC chip that measures 6.5 mm × 8 mm. Although thermal vias are adequate in certain cases, the thermal solution includes heat spreaders and thermally conductive backplates. The processing hierarchy, including gold-tin die attach and various uses of polymeric attachment, must allow rework on these prototypical devices. LTCC cavity covers employ metallic coatings on their exterior surfaces. The processing of the LTCC and its effect on the function of the transmitter and receiver circuits is discussed in the poster session.
American Solar Energy Society - SOLAR 2008, Including Proc. of 37th ASES Annual Conf., 33rd National Passive Solar Conf., 3rd Renewable Energy Policy and Marketing Conf.: Catch the Clean Energy Wave
This paper summarizes operational histories of three Russian-designed photovoltaic (PV) lighthouses in Norway and Russia. All lighthouses were monitored to evaluate overall system and Nickel Cadmium (NiCad) battery bank performance to determine battery capacity, charging trends, temperature, and reliability. The practical use of PV in this unusual mode, months of battery charging followed by months of battery discharging, is documented and assessed. This paper presents operational data obtained from 2004 through 2007.
The Cognitive Foundry is a unified collection of tools for Cognitive Science and Technology applications, supporting the development of intelligent agent models. The Foundry has two primary components designed to facilitate agent construction: the Cognitive Framework and Machine Learning packages. The Cognitive Framework provides design patterns and default implementations of an architecture for evaluating theories of cognition, as well as a suite of tools to assist in the building and analysis of theories of cognition. The Machine Learning package provides tools for populating components of the Cognitive Framework from domain-relevant data using automated knowledge-capture techniques. This paper describes the Cognitive Foundry with a focus on its application within the context of agent behavior modeling.
Simulation of potential radionuclide transport in the saturated zone from beneath the proposed repository at Yucca Mountain to the accessible environment is an important aspect of the total system performance assessment (TSPA) for disposal of high-level radioactive waste at the site. Analyses of uncertainty and sensitivity are integral components of the TSPA and have been conducted at both the sub-system and system levels to identify parameters and processes that contribute to the overall uncertainty in predictions of repository performance. Results of the sensitivity analyses indicate that uncertainty in groundwater specific discharge along the flow path in the saturated zone from beneath the repository is an important contributor to uncertainty in TSPA results and is the dominant source of uncertainty in transport times in the saturated zone for most radionuclides. Uncertainties in parameters related to matrix diffusion in the volcanic units, colloid-facilitated transport, and sorption are also important contributors to uncertainty in transport times to differing degrees for various radionuclides.
The drift-shadow effect describes capillary diversion of water flow around a drift or cavity in porous or fractured rock, resulting in lower water flux directly beneath the cavity. This paper presents computational simulations of drift-shadow experiments using dual-permeability models, similar to the models used for performance assessment analyses of flow and seepage in unsaturated fractured tuff at Yucca Mountain. Results show that the dual-permeability models capture the salient trends and behavior observed in the experiments, but constitutive relations (e.g., fracture capillary-pressure curves) can significantly affect the simulated results. An evaluation of different meshes showed that, at the grid refinement used, orthogonal and unstructured meshes did not produce large differences in the results.
Uncertainty and sensitivity analyses of the expected dose to the reasonably maximally exposed individual in the Yucca Mountain 2008 total system performance assessment (TSPA) are presented. Uncertainty results are obtained with Latin hypercube sampling of epistemically uncertain inputs, and partial rank correlation coefficients are used to illustrate sensitivity analysis results.
The Total System Performance Assessment (TSPA) for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada, uses a sampling-based approach to uncertainty and sensitivity analysis. Specifically, Latin hypercube sampling is used to generate a mapping between epistemically uncertain analysis inputs and analysis outcomes of interest. This results in distributions that characterize the uncertainty in analysis outcomes. Further, the resultant mapping can be explored with sensitivity analysis procedures based on (i) examination of scatterplots, (ii) partial rank correlation coefficients, (iii) R2 values and standardized rank regression coefficients obtained in stepwise rank regression analyses, and (iv) other analysis techniques. The TSPA considers over 300 epistemically uncertain inputs (e.g., corrosion properties, solubilities, retardations, defining parameters for Poisson processes, ...) and over 70 time-dependent analysis outcomes (e.g., physical properties in waste packages and the engineered barrier system; releases from the engineered barrier system, the unsaturated zone, and the saturated zone for individual radionuclides; and annual dose to the reasonably maximally exposed individual (RMEI) from both individual radionuclides and all radionuclides). The obtained uncertainty and sensitivity analysis results play an important role in facilitating understanding of analysis results, supporting analysis verification, establishing risk importance, and enhancing overall analysis credibility. The uncertainty and sensitivity analysis procedures are illustrated and explained with selected results for releases from the engineered barrier system, the unsaturated zone, and the saturated zone, and also for annual dose to the RMEI.
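The sampling-based workflow described above can be illustrated with a minimal sketch: Latin hypercube samples of the uncertain inputs are propagated through a model, and partial rank correlation coefficients (PRCCs) summarize how strongly each input influences an outcome. The three-input toy model, its ranges, and the sample size below are illustrative assumptions, not the actual TSPA inputs.

```python
# Minimal sketch of a sampling-based uncertainty/sensitivity workflow:
# Latin hypercube sampling of uncertain inputs, propagation through a
# placeholder model, and partial rank correlation coefficients (PRCCs).
import numpy as np
from scipy.stats import qmc, rankdata

def toy_model(x):
    # Placeholder for the actual performance-assessment model.
    return x[:, 0] * np.exp(-x[:, 1]) + 0.1 * x[:, 2]

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    coeffs = []
    for i in range(R.shape[1]):
        others = np.delete(R, i, axis=1)
        A = np.column_stack([others, np.ones(len(ry))])
        # Residuals after removing the rank-linear effect of the other inputs.
        res_x = R[:, i] - A @ np.linalg.lstsq(A, R[:, i], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=300)
X = qmc.scale(unit, l_bounds=[0.1, 0.0, 1.0], u_bounds=[2.0, 5.0, 10.0])
y = toy_model(X)
print("PRCCs:", prcc(X, y))
```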
We present a model to evaluate the water mass balance inside a breached waste package in Yucca Mountain (YM) repository environments. The amount of water, as liquid or vapor, that can accumulate inside or percolate through the package in the emplacement drift is modeled as a function of the temperature and relative humidity (RH) near the waste package, the dripping rate of water from seepage, the area of failure patches on the waste package, and the extent of waste degradation. The water activity inside the waste package is assumed to be determined by both matric and osmotic potentials in the porous waste degradation products, which also include hygroscopic salts. We implemented the model and conducted a set of Monte Carlo simulations to gain insight into the variability and uncertainty associated with model predictions. The model shows that water vapor diffusion can be as important as the advective seepage flow. In addition, chemical reactions during waste degradation can consume a significant fraction of the water accumulated in the waste package.
This report evaluates transportation risk for nuclear material in the proposed Global Nuclear Energy Partnership (GNEP) fuel cycle. Since many details of the GNEP program are yet to be determined, this document is intended only to identify general issues. The existing regulatory environment is determined to be largely prepared to incorporate the changes that the GNEP program will introduce. Nuclear material vulnerability and attractiveness are considered with respect to the various transport stages within the GNEP fuel cycle. It is determined that increased transportation security will be required for the GNEP fuel cycle, particularly for international transport. Finally, transportation considerations for several fuel cycle scenarios are discussed. These scenarios compare the current "once-through" fuel cycle with various aspects of the proposed GNEP fuel cycle.
The advent of the nuclear renaissance gives rise to a concern for the effective design of nuclear fuel cycle systems that are safe, secure, nonproliferating, and cost-effective. We propose to integrate the monitoring of the four major factors of nuclear facilities by focusing on the interactions between Safeguards, Operations, Security, and Safety (SOSS). We propose to develop a framework that monitors process information continuously and can demonstrate the ability to enhance safety, operations, security, and safeguards by measuring and reducing relevant SOSS risks, thus ensuring the safe and legitimate use of the nuclear fuel cycle facility. A real-time comparison between expected and observed operations provides the foundation for the calculation of SOSS risk. The automation of new nuclear facilities, which require minimal manual operation, provides an opportunity to utilize the abundance of process information for monitoring SOSS risk, and such continuous monitoring can also lead to greater transparency of nuclear fuel cycle activities. Sandia National Laboratories (SNL) has developed a risk algorithm for safeguards and is in the process of demonstrating the ability to monitor operational signals in real time through a cooperative research project with the Japan Atomic Energy Agency (JAEA). The risk algorithms for safety, operations, and security are under development. The next stage of this work will be to integrate the four algorithms into a single framework.
This paper summarizes the results of a Phenomena Identification and Ranking Table (PIRT) exercise performed for nuclear power plant (NPP) fire modeling applications conducted on behalf of the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES). A PIRT exercise is a formalized, facilitated expert elicitation process. In this case, the expert panel was comprised of seven international fire science experts and was facilitated by Sandia National Laboratories (SNL). The objective of a PIRT exercise is to identify key phenomena associated with the intended application and to then rank the importance and current state of knowledge of each identified phenomenon. One intent of this process is to provide input into the process of identifying and prioritizing future research efforts. In practice, the panel considered a series of specific fire scenarios based on scenarios typically considered in NPP applications. Each scenario includes a defined figure of merit; that is, a specific goal to be achieved in analyzing the scenario through the application of fire modeling tools. The panel identifies any and all phenomena relevant to a fire modeling-based analysis for the figure of merit. Each phenomenon is ranked relative to its importance to the fire model outcome and then further ranked against the existing state of knowledge and adequacy of existing modeling tools to predict that phenomenon. The PIRT panel covered several fire scenarios and identified a number of areas potentially in need of further fire modeling improvements. The paper summarizes the results of the ranking exercise.
Ceramic samples of Pb0.99La0.01(Zr0.91Ti0.09)O3 were studied by dielectric and time-of-flight neutron diffraction measurements at 300 and 250 K as a function of pressure. Isothermal dielectric data (300/250 K) suggest structural transitions with onsets near 0.35/0.37 GPa, respectively, for increasing pressure. On pressure release, only the 300 K transition occurs (at 0.10 GPa; none is indicated at 250 K). Diffraction data at 300 K show the sample has the R3c structure, and it remains in that phase on cooling to 250 K. Increasing the pressure (at either 300 or 250 K) above 0.3 GPa yields a Pnma-like (AO) phase (two other prominent peaks in the spectra suggest a possible incommensurate cell). Temperature/pressure excursions show considerable phase hysteresis.
American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008
Sevougian, S.D.; Behie, Alda; Chipman, Veraun; Gross, Michael B.; Mehta, Sunil; Statham, William
The representation of disruptive events (seismic and igneous events) and early failures of waste packages and drip shields in the 2008 total system performance assessment (TSPA) for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada, is described. In the context of the 2008 TSPA, disruptive events and early failures are treated as phenomena that occur randomly (e.g., the time of a seismic event) and also have properties that are random (e.g., the peak ground velocity associated with a seismic event). Specifically, the following potential disruptions are considered: (i) early failure of individual drip shields, (ii) early failure of individual waste packages, (iii) igneous intrusion events that result in the filling of the waste disposal drifts with magma, (iv) volcanic eruption events that result in the dispersal of waste into the atmosphere, (v) seismic events that damage waste packages and drip shields as a result of strong vibratory ground motion, and (vi) seismic events that damage waste packages and drip shields as a result of shear displacement along a fault. Example annual dose results are shown for the two most risk-significant events: strong seismic ground motion and igneous intrusion.
The development of separation distances for hydrogen facilities can be determined in several ways. A conservative approach is to use the worst possible accidents in terms of consequences. Such accidents may be of very low frequency and would likely never occur. Although this approach bounds separation distances, the resulting distances are generally prohibitive. The current separation distances in hydrogen codes and standards do not reflect this approach. An alternative deterministic approach that is often utilized by standards development organizations and allowed under some regulations is to select accident scenarios that are more probable but do not provide bounding consequences. In this approach, expert opinion is generally used to select the accidents used as the basis for the prescribed separation distances.
Proceedings - 2008 International Symposium on Microelectronics, IMAPS 2008
Knudson, R.T.; Barner, Greg; Smith, Frank; Zawicki, Larry; Peterson, Ken
Full tape thickness features (FTTF) using conductors, high-K and low-K dielectrics, sacrificial volume materials, and magnetic materials are useful as technically effective and cost-effective approaches to multiple needs in laminate microelectronic and microsystem structures. Lowering resistance in conductor traces of all kinds, raising Q-factors in coils, and enhancing EMI shielding in RF designs are a few of the modern needs. By filling with suitable dielectric compositions, one can deliver embedded capacitors with an appropriate balance between mechanical compatibility and safety factor for fabrication. Similar techniques could be applied to magnetic materials without wasteful manufacturing processes when the magnetic material is a small fraction of the overall circuit area. Finally, to open the technology of unfilled volumes for radio frequency performance as well as microfluidics and mixed cofired material applications, the full tape thickness implementation of sacrificial volume materials is also considered. We discuss implementations of FTTF structures, the technical problems involved, and the promise such structures hold for the future.
With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize the impact of compression on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited to our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, whereas conventional lossless techniques achieved factors of less than 3.
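A minimal sketch of the thresholding idea described above, using the PyWavelets library: decompose a signal, zero the small detail coefficients, and reconstruct. The test signal, wavelet choice, decomposition depth, and threshold are illustrative assumptions rather than the parameters used in the study.

```python
# Sketch of wavelet compression by coefficient thresholding (PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
# A narrow "target signature" plus low-amplitude noise (illustrative only).
signal = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.05 * rng.standard_normal(t.size)

# Multilevel decomposition into approximation + detail coefficients.
coeffs = pywt.wavedec(signal, "db4", level=5)

# Zero small detail coefficients (high-frequency, low-amplitude content).
threshold = 0.1
compressed = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]

zeroed = sum(int(np.count_nonzero(c == 0.0)) for c in compressed)
total = sum(c.size for c in compressed)
print(f"fraction of coefficients zeroed: {zeroed / total:.2f}")

# Reconstruct; the sparse coefficient arrays compress well with a lossless encoder.
recon = pywt.waverec(compressed, "db4")
print("max reconstruction error:", float(np.max(np.abs(recon[: signal.size] - signal))))
```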
This report describes progress in designing a neutral atom trap capable of trapping sub-millikelvin atoms in a magnetic trap and shuttling the atoms across the atom chip from a collection area to an optical cavity. The numerical simulation and atom chip design are discussed. Also discussed are preliminary calculations of quantum noise sources in Kerr nonlinear optics measurements based on electromagnetically induced transparency. These types of measurements may be important for quantum nondemolition measurements at the few-photon limit.
Communities of vertices within a giant network such as the World Wide Web are likely to be vastly smaller than the network itself. However, Fortunato and Barthelemy have proved that modularity maximization algorithms for community detection may fail to resolve communities with fewer than √(L/2) edges, where L is the number of edges in the entire network. This resolution limit leads modularity maximization algorithms to have notoriously poor accuracy on many real networks. Fortunato and Barthelemy's argument can be extended to networks with weighted edges as well, and we derive this corollary argument. We conclude that weighted modularity algorithms may fail to resolve communities with less than √(Wε/2) total edge weight, where W is the total edge weight in the network and ε is the maximum weight of an inter-community edge. If ε is small, then small communities can be resolved. Given a weighted or unweighted network, we describe how to derive new edge weights in order to achieve a low ε. We modify the 'CNM' community detection algorithm to maximize weighted modularity and show that the resulting algorithm has greatly improved accuracy. In experiments with an emerging community standard benchmark, we find that our simple CNM variant is competitive with the most accurate community detection methods yet proposed.
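For reference, the weighted modularity that a weighted CNM variant maximizes can be written as Q = (1/2W) Σ_ij [w_ij − k_i k_j/(2W)] δ(c_i, c_j). The sketch below simply evaluates this quantity for a given partition of a toy graph (two weakly coupled triangles); it illustrates the objective, not the authors' algorithm or their edge-reweighting scheme.

```python
# Evaluate weighted modularity Q for a given node-to-community assignment.
import numpy as np

def weighted_modularity(A, labels):
    """A: symmetric weighted adjacency matrix; labels: community id per node."""
    W2 = A.sum()                  # 2W: each undirected edge counted twice
    k = A.sum(axis=1)             # weighted degrees
    same = np.equal.outer(labels, labels)
    return float(((A - np.outer(k, k) / W2) * same).sum() / W2)

# Two triangles with heavy intra-community weights and one light bridge edge.
A = np.zeros((6, 6))
for i, j, w in [(0, 1, 2.0), (1, 2, 2.0), (0, 2, 2.0),
                (3, 4, 2.0), (4, 5, 2.0), (3, 5, 2.0),
                (2, 3, 0.5)]:
    A[i, j] = A[j, i] = w

labels = np.array([0, 0, 0, 1, 1, 1])
print("weighted modularity:", weighted_modularity(A, labels))
```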
The Department of Energy (DOE) National Laboratories support the Department of Homeland Security (DHS) in the development and execution of a research and development (R&D) strategy to improve the nation's preparedness against terrorist threats. Current approaches to planning and prioritization of DHS research decisions are informed by risk assessment tools and processes intended to allocate resources to programs that are likely to have the highest payoff. Early applications of such processes have faced challenges in several areas, including characterization of the intelligent adversary and linkage to strategic risk management decisions. The risk-based analysis initiatives at Sandia Laboratories could augment the methodologies currently being applied by the DHS and could support more credible R&D roadmapping for national homeland security programs. Implementation and execution issues facing homeland security R&D initiatives within the national laboratories emerged as a particular concern in this research.
This report provides an assessment of well construction technology for EGS, with two primary objectives: (1) determining the ability of existing technologies to develop EGS wells, and (2) identifying critical well construction research lines and development technologies that are likely to enhance prospects for EGS viability and improve overall economics.
In this report, we present the novel functionality of parallel tetrahedral mesh refinement which we have implemented in MOAB. This report details work done to implement parallel, edge-based, tetrahedral refinement into MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06], while information on design, performance, and operation specific to MOAB is contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
The Advanced Engineering Environment (AEE) is a model for an engineering design and communications system that will enhance project collaboration throughout the nuclear weapons complex (NWC). Sandia National Laboratories and Parametric Technology Corporation (PTC) worked together on a prototype project to evaluate the suitability of a portion of PTC's Windchill 9.0 suite of data management, design and collaboration tools as the basis for an AEE. The AEE project team implemented Windchill 9.0 development servers in both classified and unclassified domains and used them to test and evaluate the Windchill tool suite relative to the needs of the NWC using weapons project use cases. A primary deliverable was the development of a new real time collaborative desktop design and engineering process using PDMLink (data management tool), Pro/Engineer (mechanical computer aided design tool) and ProductView Lite (visualization tool). Additional project activities included evaluations of PTC's electrical computer aided design, visualization, and engineering calculations applications. This report documents the AEE project work to share information and lessons learned with other NWC sites. It also provides PTC with recommendations for improving their products for NWC applications.
Scoping studies have demonstrated that ceragenins, when linked to water-treatment membranes, have the potential to create biofouling-resistant water-treatment membranes. Ceragenins are synthetically produced molecules that mimic antimicrobial peptides. Evidence includes measurements showing that CSA-13 inhibits the growth of, and kills, planktonic Pseudomonas fluorescens. In addition, imaging of biofilms that were in contact with a ceragenin showed more dead cells relative to live cells than in a biofilm that had not been treated with a ceragenin. This work has demonstrated that ceragenins can be attached to polyamide reverse osmosis (RO) membranes, though further work is needed to improve the uniformity of the attachment. Finally, methods have been developed to use hyperspectral imaging with multivariate curve resolution to view ceragenins attached to the RO membrane. Future work will be conducted to better attach the ceragenin to the RO membranes and to more completely test the biocidal effectiveness of the ceragenins on the membranes.
We have conducted a molecular dynamics (MD) simulation study of water confined between methyl-terminated and carboxyl-terminated alkylsilane self-assembled monolayers (SAMs) on amorphous silica substrates. In doing so, we have investigated the dynamic and structural behavior of the water molecules when compressed to loads ranging from 20 to 950 MPa for two different amounts of water (27 and 58 water molecules/nm^2). Within the studied range of loads, we observe that no water molecules penetrate the hydrophobic region of the carboxyl-terminated SAMs. However, we observe that at loads larger than 150 MPa water molecules penetrate the methyl-terminated SAMs and form hydrogen-bonded chains that connect to the bulk water. The diffusion coefficient of the water molecules decreases as the water film becomes thinner and pressure increases. When compared to bulk diffusion coefficients of water molecules at the various loads, we found that the diffusion coefficients for the systems with 27 water molecules/nm^2 are reduced by a factor of 20 at low loads and by a factor of 40 at high loads, while the diffusion coefficients for the systems with 58 water molecules/nm^2 are reduced by a factor of 25 at all loads.
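As background on how diffusion coefficients such as those compared above are typically extracted from MD trajectories, the Einstein relation gives MSD(t) ≈ 6Dt in three dimensions. The sketch below applies that relation to a synthetic random-walk trajectory; the step size, time step, and units are placeholder assumptions, not values from this study.

```python
# Estimate a diffusion coefficient from trajectories via the Einstein relation.
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_atoms, dt = 2000, 100, 1.0e-3        # dt in ns (assumed units)
# Synthetic displacements; a real analysis would unwrap periodic MD coordinates.
steps = rng.normal(scale=0.01, size=(n_steps, n_atoms, 3))   # nm per step
positions = np.cumsum(steps, axis=0)

# Mean-squared displacement averaged over atoms and time origins.
lags = np.arange(1, 500)
msd = np.array([
    np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
    for lag in lags
])

# Linear fit of MSD versus time; the slope equals 6 D in 3D.
slope = np.polyfit(lags * dt, msd, 1)[0]
print("estimated D:", slope / 6.0, "nm^2/ns")
```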
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that partitioning time can be substantially reduced by using the SCOTCH software, and that quality improves in some cases as well.
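To make the kernel and the cost being minimized concrete, the sketch below runs a sparse matrix-vector product with SciPy and tallies the communication volume implied by a naive 1D row partition over two hypothetical processes. The random matrix and the simple partition are illustrative only; they are not the paper's 2D nested-dissection method.

```python
# Sparse matrix-vector kernel and a naive communication-volume count.
import numpy as np
import scipy.sparse as sp

n = 8
A = sp.random(n, n, density=0.3, random_state=0, format="csr")
x = np.random.default_rng(0).random(n)
y = A @ x   # the SpMV kernel at the core of these computations

# 1D row partition over two processes: rows 0..3 on process 0, rows 4..7 on process 1;
# the vector entries are partitioned the same way.
owner_of_row = np.repeat([0, 1], n // 2)
owner_of_col = owner_of_row

coo = A.tocoo()
# Each vector entry needed by a process that does not own it is sent once.
needed = {(int(owner_of_row[i]), int(j))
          for i, j in zip(coo.row, coo.col)
          if owner_of_row[i] != owner_of_col[j]}
print("nonzeros:", A.nnz)
print("communication volume (vector entries received off-process):", len(needed))
```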
Shielded special nuclear material (SNM) is very difficult to detect, and new technologies are needed to clear alarms and verify the presence of SNM. High-energy photons and neutrons can be used to actively interrogate for heavily shielded SNM, such as highly enriched uranium (HEU), since neutrons can penetrate gamma-ray shielding and gamma-rays can penetrate neutron shielding. Both source particles then induce unique detectable signals from fission. In this LDRD, we explored a new type of interrogation source that uses low-energy proton- or deuteron-induced nuclear reactions to generate high fluxes of mono-energetic gammas or neutrons. Accelerator-based experiments, computational studies, and prototype source tests were performed to obtain a better understanding of (1) the flux requirements, (2) fission-induced signals, background, and interferences, and (3) the operational performance of the source. The results of this research led to the development and testing of an axial-type gamma tube source and the design/construction of a high-power coaxial-type gamma generator based on the 11B(p,γ)12C nuclear reaction.
Cognitive science research investigates the advancement of human cognition and neuroscience capabilities. Addressing risks associated with these advancements can counter potential program failures, legal and ethical issues, constraints to scientific research, and product vulnerabilities. Survey results, focus group discussions, cognitive science experts, and surety researchers concur that technical risks exist that could impact cognitive science research in areas such as medicine, privacy, human enhancement, law and policy, military applications, and national security (SAND2006-6895). This SAND report documents a surety engineering framework and a process for identifying cognitive system technical, ethical, legal, and societal risks and applying appropriate surety methods to reduce such risks. The framework consists of several models: Specification, Design, Evaluation, Risk, and Maturity. Two detailed case studies are included to illustrate the use of the process and framework. Several appendices provide detailed information on existing cognitive system architectures; ethical, legal, and societal risk research; surety methods and technologies; and educing-information research with a case study vignette. The process and framework provide a model for how cognitive systems research and full-scale product development can apply surety engineering to reduce perceived and actual risks.
The Heavy Ion Fusion Science Virtual National Laboratory has achieved 60-fold longitudinal pulse compression of ion beams on the Neutralized Drift Compression Experiment (NDCX) [P. K. Roy et al., Phys. Rev. Lett. 95, 234801 (2005)]. To focus a space-charge-dominated charge bunch to sufficiently high intensities for ion-beam-heated warm dense matter and inertial fusion energy studies, simultaneous transverse and longitudinal compression to a coincident focal plane is required. Optimizing the compression under the appropriate constraints can deliver higher intensity per unit length of accelerator to the target, thereby facilitating the creation of more compact and cost-effective ion beam drivers. The experiments utilized a drift region filled with high-density plasma in order to neutralize the space charge and current of an ~300 keV K+ beam and have separately achieved transverse and longitudinal focusing to a radius <2 mm and a pulse duration <5 ns, respectively. Simulation predictions and recent experiments demonstrate that a strong solenoid (B_z < 100 kG) placed near the end of the drift region can transversely focus the beam to the longitudinal focal plane. This paper reports on simulation predictions and experimental progress toward realizing simultaneous transverse and longitudinal charge bunch focusing. The proposed NDCX-II facility would capitalize on the insights gained from NDCX simulations and measurements in order to provide a higher-energy (>2 MeV) ion beam user facility for warm dense matter and inertial fusion energy-relevant target physics experiments.
A system dynamics model was developed in response to the apparent decline in STEM candidates in the United States and a pending shortage. The model explores the attractiveness of STEM and STEM careers focusing on employers and the workforce. Policies such as boosting STEM literacy, lifting the H-1B visa cap, limiting the offshoring of jobs, and maintaining training are explored as possible solutions. The system is complex, with many feedbacks and long time delays, so solutions that focus on a single point of the system are not effective and cannot solve the problem. A deeper understanding of parts of the system that have not been explored to date is necessary to find a workable solution.
This report presents a review and evaluation of software and codes that have been used to support Sandia National Laboratories concentrating solar power (CSP) program. Additional software packages developed by other institutions and companies that can potentially improve Sandia's analysis capabilities in the CSP program are also evaluated. The software and codes are grouped according to specific CSP technologies: power tower systems, linear concentrator systems, and dish/engine systems. A description of each code is presented with regard to each specific CSP technology, along with details regarding availability, maintenance, and references. A summary of all the codes is then presented with recommendations regarding the use and retention of the codes. A description of probabilistic methods for uncertainty and sensitivity analyses of concentrating solar power technologies is also provided.
In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.
This work demonstrated the feasibility and limitations of semiconducting π-conjugated organic polymers for fast neutron detection via n-p elastic scattering. Charge collection in conjugated polymers in the family of substituted poly(p-phenylene vinylene)s (PPV) was evaluated using band-edge laser and proton beam ionization. These semiconducting materials can have a high H/C ratio, wide bandgap, high resistivity, and high dielectric strength, allowing high-field operation with low leakage current and capacitance noise. The materials can also be solution cast, allowing possible low-cost radiation detector fabrication and scale-up. However, improvements in charge collection efficiency are necessary in order to achieve single-particle detection with a reasonable sensitivity. The work examined processing variables, additives, and environmental effects. Proton beam exposure was used to verify particle sensitivity and radiation hardness to a total exposure of approximately 1 Mrad. Conductivity exhibited sensitivity to temperature and humidity. The effects of molecular ordering were investigated in stretched films, and FTIR was used to quantify the order in films using the Hermans orientation function. The photoconductive response approximately doubled for stretch-aligned films with the stretch direction parallel to the electric field direction, when compared to as-cast films. The response was decreased when the stretch direction was orthogonal to the electric field. Stretch-aligned films also exhibited a significant sensitivity to the polarization of the laser excitation, whereas drop-cast films showed none, indicating improved mobility along the backbone but poor π-overlap in the orthogonal direction. Drop-cast composites of PPV with substituted fullerenes showed approximately a two order of magnitude increase in photoresponse, nearly independent of nanoparticle concentration. Interestingly, stretch-aligned composite films showed a substantial decrease in photoresponse with increasing stretch ratio. Other additives examined, including small molecules and cosolvents, did not cause any significant increase in photoresponse. Finally, we discovered an inverse geometric particle-track effect wherein increased track lengths, created by tilting the detector off normal incidence, resulted in decreased signal collection. This is interpreted as a trap-filling effect, leading to increased carrier mobility along the particle track direction. The estimated collection efficiency along the track direction was near 20 electrons/micron of track length, sufficient for particle counting in 50-micron-thick films.
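For context, the Hermans orientation function referenced above is f = (3⟨cos²θ⟩ − 1)/2, and in FTIR practice it is commonly estimated from the dichroic ratio D = A_parallel / A_perpendicular. The sketch below assumes a transition moment parallel to the chain axis, in which case f = (D − 1)/(D + 2); the absorbance values are invented for illustration and are not measurements from this work.

```python
# Hermans orientation function estimated from an FTIR dichroic ratio,
# assuming the transition moment lies along the chain axis.
def hermans_from_dichroic_ratio(a_parallel: float, a_perpendicular: float) -> float:
    d = a_parallel / a_perpendicular
    return (d - 1.0) / (d + 2.0)

print(hermans_from_dichroic_ratio(0.80, 0.40))   # 0.25: partially aligned film
print(hermans_from_dichroic_ratio(0.50, 0.50))   # 0.0: isotropic (as-cast) film
```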
The Arquin Corporation designed a CMU (concrete masonry unit) wall construction and reinforcement technique, incorporating steel wire and polymer spacers, that is intended to facilitate faster and stronger wall construction. Since the construction method for an Arquin-designed wall is different from current wall construction practices, finite element computer analyses were performed to estimate the ability of the wall to withstand a hypothetical dynamic load, similar to that of a blast from a nearby explosion. The response of the Arquin wall was compared to the response of an idealized standard masonry wall exposed to the same dynamic load. Results from the simulations show that the Arquin wall deformed less than the idealized standard wall under such loading conditions. As part of a different effort, Sandia National Laboratories also examined the relative static response of the Arquin wall; those results are summarized in a separate SAND report.
This document presents the security automated Risk Assessment Methodology (RAM) prototype tool developed by Sandia National Laboratories (SNL). This work leverages SNL's capabilities and skills in security risk analysis and the development of vulnerability assessment/risk assessment methodologies to develop an automated prototype security RAM tool for critical infrastructures (RAM-CI™). The prototype automated RAM tool provides a user-friendly, systematic, and comprehensive risk-based tool to assist CI sector and security professionals in assessing and managing security risk from malevolent threats. The current tool is structured on the basic RAM framework developed by SNL. It is envisioned that this prototype tool will be adapted to meet the requirements of different CI sectors and thereby provide additional capabilities.
Inductive electromagnetic launchers, or coilguns, use discrete solenoidal coils to accelerate a coaxial conductive armature. To date, Sandia has been using an internally developed code, SLINGSHOT, as a point-mass lumped circuit element simulation tool for modeling coilgun behavior for design and verification purposes. This code has shortcomings in terms of accurately modeling gun performance under stressful electromagnetic propulsion environments. To correct for these limitations, it was decided to attempt to closely couple two Sandia simulation codes, Xyce and ALEGRA, to develop a more rigorous simulation capability for demanding launch applications. This report summarizes the modifications made to each respective code and the path forward to completing interfacing between them.
The Arquin Corporation has developed a new method of constructing CMU (concrete masonry unit) walls. This new method uses polymer spacers connected to steel wires that serve as reinforcement as well as a means of accurately placing the spacers so that the concrete block can be dry stacked. The hollows of the concrete block used in constructing the wall are then filled with grout. As part of a New Mexico Small Business Assistance Program (NMSBAP) project, Sandia National Laboratories conducted a series of tests that statically loaded wall segments to compare the Arquin method to a more traditional method of constructing CMU walls. A total of 12 tests were conducted: three with the Arquin method using a W5 reinforcing wire, three with the traditional method of construction using a number 3 rebar as reinforcing, three with the Arquin method using a W2 reinforcing wire, and three with the traditional construction method but without rebar. The results of the tests showed that the walls constructed with the Arquin method and with a W5 reinforcing wire withstood more load than any of the other three types of walls that were tested.
This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.