Electromagnetic shielding (EMS) becomes more demanding as isolation requirements exceed 100 dB in advanced S-band transceiver designs. Via-hole fences have served such designs well in low temperature cofired ceramic (LTCC) modules when used in 2-3 rows, depending on requirements. Replacing these vias with slots through the full thickness of a tape layer has been modeled and shown to improve isolation. We expand on a technique for replacing these rows of full tape thickness features (FTTF) with a single row of stacked walls which, by sequential punching, can be continuous, providing a solid Faraday cage board element with leak-free seams. We discuss the material incompatibilities and manufacturing considerations that need to be addressed for such structures and show preliminary implementations. We also compare construction of multilayer and single-layer designs.
Recent papers have argued for the benefit of a tighter integration of the disciplines of human factors (HF) and human reliability analysis (HRA). While both disciplines are concerned with human performance, HF uses performance data to prescribe optimal human-machine interface (HMI) design, while HRA applies human performance principles and data to model the probabilistic risk of human activities. An overlap between the two disciplines is hindered by the seeming incompatibility of their respective data needs. For example, while HF studies produce data especially about the efficacy of particular system designs, these efficacy data are rarely framed in such a way as to provide the magnitude of the performance effect in terms of human error. While qualitative insights for HRA result from the HF studies, the HF studies often fail to produce data that inform the quantification of human error. In this paper, the author presents a review of the data requirements for HRA and offers suggestions on how to piggyback HRA data collection on existing HF studies. HRA data requirements include specific parameters such as the effect size of the human performance increment or degradation observed and classification of the human performance according to a simple set of performance shaping factors.
Active aerodynamic load control of wind turbine blades has been studied for years by the wind energy community and shows great promise for reducing turbine fatigue damage. One way to benefit from this technology is to use a larger rotor on a given turbine tower and drive train, increasing turbine energy capture while keeping the fatigue damage of critical turbine components at the original levels. To assess this rotor-increase potential, Sandia National Laboratories and FlexSys Inc. performed aero/structural simulations of a 1.5 MW wind turbine at mean wind speeds spanning the entire operating range. Moment loads at several critical system locations were post-processed and evaluated for fatigue damage accumulation at each mean wind speed. Combining these fatigue damage estimates with a Rayleigh wind-speed distribution yielded estimates of the total fatigue damage accumulation for the turbine. This simulation procedure was performed for both the baseline turbine system and the turbine system incorporating a rotor equipped with FlexSys active aerodynamic load control devices. The simulation results were post-processed to evaluate the decrease in blade root flap fatigue damage accumulation provided by the active aero technology. The blade length was then increased until the blade root flap fatigue damage accumulation matched that of the baseline rotor. With the new rotor size determined, the additional energy capture potential was calculated. These analyses resulted in an energy capture increase of 11% for a mean wind speed of 6.5 m/s.
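To illustrate the damage-aggregation step described above, the sketch below (Python, with hypothetical per-bin damage rates, not the Sandia/FlexSys post-processing code) weights fatigue damage rates by a Rayleigh wind-speed distribution to estimate annual accumulation:

    import math

    # Rayleigh probability density for the mean wind speed v, parameterized by
    # the site annual mean wind speed v_mean.
    def rayleigh_pdf(v, v_mean):
        return (math.pi * v / (2.0 * v_mean**2)) * math.exp(-math.pi * v**2 / (4.0 * v_mean**2))

    # Hypothetical fatigue damage rates (damage fraction per hour of operation)
    # from aeroelastic simulations binned by mean wind speed [m/s].
    damage_rate_by_bin = {5: 1.0e-7, 7: 3.0e-7, 9: 8.0e-7, 11: 2.0e-6,
                          13: 4.0e-6, 15: 6.0e-6, 17: 7.0e-6, 19: 8.0e-6}

    def total_annual_damage(damage_rates, v_mean, bin_width=2.0, hours_per_year=8766.0):
        """Weight per-bin damage rates by the Rayleigh wind-speed distribution."""
        total = 0.0
        for v, rate in damage_rates.items():
            time_fraction = rayleigh_pdf(v, v_mean) * bin_width
            total += rate * time_fraction * hours_per_year
        return total

    print(f"Estimated annual fatigue damage: {total_annual_damage(damage_rate_by_bin, 6.5):.3e}")

The same aggregation would be repeated for the baseline and the active-aero rotor to compare blade root flap damage on an equal-footing annual basis.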
Recent advances in nanoparticle inks have enabled inkjet printing of metal traces and interconnects with very low (100-200°C) process temperatures. This has enabled integration of printable electronics such as antennas and radio frequency identification (RFID) tags with polyimide, Teflon, printed circuit boards (PCBs), and other low-temperature substrates. We discuss here printing of nanoparticle inks for three-dimensional interconnects, and the apparent mechanism of nanoparticle ink conductivity development at these low process temperatures.
This paper describes a set of critical experiments that were done to gather benchmark data on the effects of rhodium in critical systems. Approach-to-critical experiments with arrays of low-enriched water-moderated and -reflected fuel were performed with rhodium foils sandwiched between the fuel pellets in some of the fuel elements. The results of the experiments are compared with results from two Monte Carlo codes using cross sections from ENDF/B-V, ENDF/B-VI, and ENDF/B-VII.
This paper applies a pragmatic interval-based approach to validation of a fire dynamics model involving computational fluid dynamics, combustion, participating-media radiation, and heat transfer. Significant aleatory and epistemic sources of uncertainty exist in the experiments and simulations. The validation comparison of experimental and simulation results, and the corresponding criteria and procedures for model affirmation or refutation, take place in "real space" as opposed to "difference space" where subtractive differences between experiments and simulations are assessed. The versatile model validation framework handles difficulties associated with representing and aggregating aleatory and epistemic uncertainties from multiple correlated and uncorrelated source types, including:
• experimental variability from multiple repeat experiments
• uncertainty of experimental inputs
• experimental output measurement uncertainties
• uncertainties that arise in data processing and inference from raw simulation and experiment outputs
• parameter and model-form uncertainties intrinsic to the model
• numerical solution uncertainty from model discretization effects.
The framework and procedures of the model validation methodology are here applied to a difficult validation problem involving experimental and predicted calorimeter temperatures in a wind-driven hydrocarbon pool fire.
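A minimal sketch of the real-space comparison idea, assuming the aggregated experimental and simulation uncertainties have already been reduced to intervals on the quantity of interest (the interval endpoints below are hypothetical, not values from the fire validation study):

    # Real-space comparison: experiment and simulation are each represented by an
    # uncertainty interval on the quantity of interest (e.g., a calorimeter
    # temperature), rather than by a subtractive difference.
    def interval_overlap(exp_lo, exp_hi, sim_lo, sim_hi):
        """Return the overlapping interval, or None if the intervals are disjoint."""
        lo, hi = max(exp_lo, sim_lo), min(exp_hi, sim_hi)
        return (lo, hi) if lo <= hi else None

    def simulation_bounds_experiment(exp_lo, exp_hi, sim_lo, sim_hi):
        """True if the simulation interval fully encloses the experimental interval."""
        return sim_lo <= exp_lo and sim_hi >= exp_hi

    # Hypothetical temperature intervals (K) at one sensor location.
    exp_interval = (740.0, 810.0)   # repeat-test variability + measurement uncertainty
    sim_interval = (720.0, 860.0)   # parameter, model-form, and numerical uncertainty

    print("overlap:", interval_overlap(*exp_interval, *sim_interval))
    print("simulation bounds experiment:", simulation_bounds_experiment(*exp_interval, *sim_interval))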
The significant growth in wind turbine installations in the past few years has fueled new scenarios that envision even larger expansion of U.S. wind electricity generation from the current 1.5% to 20% by 2030. Such goals are achievable and would reduce carbon dioxide emissions and energy dependency on foreign sources. In conjunction with such growth are the enhanced opportunities for manufacturers, developers, and researchers to participate in this renewable energy sector. Ongoing research activities at the National Renewable Energy Laboratory and Sandia National Laboratories will continue to contribute to these opportunities. This paper focuses on describing the current research efforts at Sandia's wind energy department, which are primarily aimed at developing large rotors that are lighter, more reliable and produce more energy.
Grid-based mesh generation methods have been available for many years and can provide a reliable means of meshing arbitrary geometries with hexahedral elements. Their principal use has mostly been limited to biological-type models, where topology that may incorporate sharp edges and curve definitions is not critical. While these applications have been effective, robust generation of hexahedral meshes on mechanical models, where the topology is typically of prime importance, imposes difficulties that existing grid-based methods have not yet effectively addressed. This work introduces a set of procedures that can be used to resolve the features of a geometric model for grid-based hexahedral mesh generation on mechanical or topology-rich models.
The first approach-to-critical experiment in the Seven Percent Critical Experiment series was recently completed at Sandia. The series will provide new critical and reactor physics benchmarks for fuel enrichments greater than five weight percent. The inverse multiplication method was used to determine the state of the system during the course of the experiment, and it indicated that the system went slightly supercritical with 1148 fuel elements in the fuel array. The experiment is described and its results are presented.
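For illustration, the inverse multiplication (1/M) technique extrapolates the reciprocal of the measured neutron multiplication toward zero as fuel is added; the sketch below uses hypothetical count rates and a simple two-point linear extrapolation, not the actual experiment data:

    # Inverse multiplication (1/M) approach-to-critical estimate.
    # M is taken proportional to the ratio of the measured count rate at a given
    # loading to the count rate of the source-only (reference) configuration.

    # Hypothetical detector count rates (counts/s) versus number of fuel elements.
    reference_rate = 100.0
    loadings = [200, 400, 600, 800, 1000]
    count_rates = [120.0, 160.0, 240.0, 480.0, 2400.0]

    inverse_m = [reference_rate / c for c in count_rates]

    # Linear extrapolation of 1/M through the last two points to 1/M = 0,
    # giving a rough estimate of the critical loading.
    n1, n2 = loadings[-2], loadings[-1]
    y1, y2 = inverse_m[-2], inverse_m[-1]
    slope = (y2 - y1) / (n2 - n1)
    critical_estimate = n2 - y2 / slope

    print("1/M values:", [round(y, 3) for y in inverse_m])
    print(f"Extrapolated critical loading: ~{critical_estimate:.0f} fuel elements")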
Mappings from a master element to the physical mesh element, in conjunction with local metrics such as those appearing in the Target-matrix paradigm, are used to measure quality at points within an element. The approach is applied to both linear and quadratic triangular elements; this enables, for example, one to measure quality within a quadratic finite element. Quality within an element may also be measured on a set of symmetry points, leading to so-called symmetry metrics. An important issue concerning the labeling of the element vertices is relevant to mesh quality tools such as Verdict and Mesquite. Certain quality measures, like area, volume, and shape, should be label-invariant, while others, such as aspect ratio and orientation, should not. It is shown that local metrics whose Jacobian matrix is non-constant are label-invariant only at the center of the element, while symmetry metrics can be label-invariant anywhere within the element, provided the reference element is properly restricted.
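As a simple illustration of measuring quality through the master-to-physical mapping, the sketch below evaluates the Jacobian of a linear triangle map and a condition-number-style shape measure; it is only a schematic of the general idea, not the Target-matrix, Verdict, or Mesquite implementation:

    def linear_triangle_jacobian(p0, p1, p2):
        """Jacobian of the map from the master triangle (0,0)-(1,0)-(0,1) to the
        physical triangle p0,p1,p2. For a linear triangle the Jacobian is constant;
        for quadratic elements it varies with position inside the element."""
        return [[p1[0] - p0[0], p2[0] - p0[0]],
                [p1[1] - p0[1], p2[1] - p0[1]]]

    def shape_quality(J):
        """2*det(J)/||J||_F^2: equals 1.0 when the physical element has the same
        shape as the master element, and tends to 0 as the element degenerates."""
        a, b, c, d = J[0][0], J[0][1], J[1][0], J[1][1]
        det = a * d - b * c
        if det <= 0.0:
            return 0.0  # inverted or degenerate element
        return 2.0 * det / (a * a + b * b + c * c + d * d)

    tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]  # a stretched physical triangle
    J = linear_triangle_jacobian(*tri)
    print("shape quality =", round(shape_quality(J), 3))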
Nuclear nonproliferation efforts are supported by measurements that are capable of rapidly characterizing special nuclear materials (SNM). Neutron multiplicity counting is frequently used to estimate properties of SNM, including neutron source strength, multiplication, and generation time. Different classes of model have been used to estimate these and other properties from the measured neutron counting distribution and its statistics. This paper describes a technique to compute statistics of the neutron counting distribution using deterministic neutron transport models. This approach can be applied to rapidly and accurately analyze neutron multiplicity counting measurements.
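As a reference point for the quantities such models predict, the sketch below computes the first few factorial moments of a measured neutron counting distribution from a count histogram; the histogram values are hypothetical, and the moment-to-property inversion used in multiplicity analysis is not shown:

    # Factorial moments of the counting distribution P(n) = probability of
    # registering n counts in a gate; the low-order moments underlie the
    # singles/doubles/triples quantities used in multiplicity counting.

    # Hypothetical counting histogram: counts_per_gate[n] = number of gates with n counts.
    counts_per_gate = {0: 9000, 1: 800, 2: 150, 3: 40, 4: 10}

    total_gates = sum(counts_per_gate.values())
    p = {n: c / total_gates for n, c in counts_per_gate.items()}

    def factorial_moment(p, k):
        """E[n (n-1) ... (n-k+1)] of the counting distribution."""
        total = 0.0
        for n, prob in p.items():
            term = 1.0
            for j in range(k):
                term *= (n - j)
            total += term * prob
        return total

    for k in (1, 2, 3):
        print(f"factorial moment m{k} = {factorial_moment(p, k):.4e}")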
Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We develop a methodology that performs uncertainty quantification in this context in the presence of limited data.
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and removing them via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction using a geometry-based size field, which is computed by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity falls below a user-specified threshold, the entity is identified as an irrelevant feature and marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered, as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and those of the original CAD model is maintained so that the attributes and boundary conditions applied to the original CAD entities can be transferred onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
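The mark-then-reduce flow can be summarized by the toy skeleton below; the size-field values, validity flags, and data structures are hypothetical stand-ins for the geometry-based size field and topology-safe edge collapse operator described above:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FacetEdge:
        eid: int
        local_size: float         # size-field value estimated from geometric criteria
        collapsible: bool = True  # passes geometric/topological validity checks

    def mark_for_reduction(edges: List[FacetEdge], threshold: float) -> List[FacetEdge]:
        """Mark edges whose size-field value falls below the user threshold."""
        return [e for e in edges if e.local_size < threshold]

    def defeature(edges: List[FacetEdge], threshold: float) -> List[int]:
        """Return ids of edges that would be collapsed (stand-in for the real operator)."""
        collapsed = []
        for e in mark_for_reduction(edges, threshold):
            if e.collapsible:            # preserve valid geometry and topology
                collapsed.append(e.eid)  # real code would perform the edge collapse here
        return collapsed

    edges = [FacetEdge(0, 5.0), FacetEdge(1, 0.2), FacetEdge(2, 0.05, collapsible=False)]
    print("edges to collapse:", defeature(edges, threshold=0.5))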
This report summarizes the findings for phase one of the agent review and discusses the review methods and results. The phase one review identified a short list of agent systems that would prove most useful in the service architecture of an information management, analysis, and retrieval system. Reviewers evaluated open-source and commercial multi-agent systems and scored them based upon viability, uniqueness, ease of development, ease of deployment, and ease of integration with other products. Based on these criteria, reviewers identified the ten most appropriate systems. The report also mentions several systems that reviewers deemed noteworthy for the ideas they implement, even if those systems are not the best choices for information management purposes.
The Alternative Liquid Fuels Simulation Model (AltSim) is a high-level dynamic simulation model which calculates and compares the production and end use costs, greenhouse gas emissions, and energy balances of several alternative liquid transportation fuels. These fuels include: corn ethanol, cellulosic ethanol from various feedstocks (switchgrass, corn stover, forest residue, and farmed trees), biodiesel, and diesels derived from natural gas (gas to liquid, or GTL), coal (coal to liquid, or CTL), and coal with biomass (CBTL). AltSim allows for comprehensive sensitivity analyses on capital costs, operation and maintenance costs, renewable and fossil fuel feedstock costs, feedstock conversion ratio, financial assumptions, tax credits, CO₂ taxes, and plant capacity factor. This paper summarizes the structure and methodology of AltSim, presents results, and provides a detailed sensitivity analysis. The Energy Independence and Security Act (EISA) of 2007 sets a goal for the increased use of biofuels in the U.S., ultimately reaching 36 billion gallons by 2022. AltSim's base case assumes EPA projected feedstock costs in 2022 (EPA, 2009). For the base case assumptions, AltSim estimates per gallon production costs for the five ethanol feedstocks (corn, switchgrass, corn stover, forest residue, and farmed trees) of $1.86, $2.32, $2.45, $1.52, and $1.91, respectively. The projected production cost of biodiesel is $1.81/gallon. The estimates for CTL without biomass range from $1.36 to $2.22. With biomass, the estimated costs increase, ranging from $2.19 per gallon for the CTL option with 8% biomass to $2.79 per gallon for the CTL option with 30% biomass and carbon capture and sequestration. AltSim compares the greenhouse gas emissions (GHG) associated with both the production and consumption of the various fuels. EISA allows fuels emitting 20% less greenhouse gases (GHG) than conventional gasoline and diesels to qualify as renewable fuels. This allows several of the CBTL options to be included under the EISA mandate. The estimated GHG emissions associated with the production of gasoline and diesel are 19.80 and 18.40 kg of CO₂ equivalent per MMBtu (kgCO₂e/MMBtu), respectively (NETL, 2008). The estimated emissions are significantly higher for several alternatives: ethanol from corn (70.6), GTL (51.9), and CTL without biomass or sequestration (123-161). Projected emissions for several other alternatives are lower; integrating biomass and sequestration in the CTL processes can even result in negative net emissions. For example, CTL with 30% biomass and 91.5% sequestration has estimated production emissions of -38 kgCO₂e/MMBtu. AltSim also estimates the projected well-to-wheel, or lifecycle, emissions from consuming each of the various fuels. Vehicles fueled with conventional diesel or gasoline and driven 12,500 miles per year emit 5.72-5.93 tons of CO₂ equivalents per year (tCO₂e/yr). Those emissions are significantly higher for vehicles fueled with 100% ethanol from corn (8.03 tCO₂e/yr) or diesel from CTL without sequestration (10.86 to 12.85 tCO₂/yr). Emissions could be significantly lower for vehicles fueled with diesel from CBTL with various shares of biomass. For example, for CTL with 30% biomass and carbon sequestration, emissions would be 2.21 tCO₂e per year, or just 39% of the emissions for a vehicle fueled with conventional diesel.
While the results presented above provide very specific estimates for each option, AltSim's true potential is as a tool for educating policy makers and for exploring 'what if?' questions. For example, AltSim allows one to consider the effect of various levels of carbon taxes on the production cost estimates, as well as the increased costs to the end user on an annual basis. Other sections of AltSim allow the user to understand the implications of various policies in terms of costs to the government or land use requirements. AltSim's structure allows the end user to explore each of these alternatives and understand the sensitivities and implications associated with each assumption, as well as the implications for bottom-line economics, energy use, and greenhouse gas emissions.
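To make the 'what if?' use concrete, here is a minimal sketch of how a carbon tax propagates into a per-gallon production cost; the cost, emission, and energy-content numbers below are entirely hypothetical placeholders and do not reproduce AltSim's data or internal structure:

    # Hypothetical sensitivity: effect of a CO2 tax on per-gallon production cost.
    # base_cost               : production cost without a carbon tax ($/gallon)
    # emissions_kg_per_mmbtu  : production emissions (kg CO2e per MMBtu of fuel)
    # mmbtu_per_gallon        : fuel energy content (MMBtu per gallon)
    def cost_with_carbon_tax(base_cost, emissions_kg_per_mmbtu, mmbtu_per_gallon, tax_per_tonne):
        tax_per_gallon = emissions_kg_per_mmbtu * mmbtu_per_gallon / 1000.0 * tax_per_tonne
        return base_cost + tax_per_gallon

    fuel = {"base_cost": 2.00, "emissions_kg_per_mmbtu": 120.0, "mmbtu_per_gallon": 0.13}

    for tax in (0, 25, 50, 100):  # $/tonne CO2e
        c = cost_with_carbon_tax(fuel["base_cost"], fuel["emissions_kg_per_mmbtu"],
                                 fuel["mmbtu_per_gallon"], tax)
        print(f"carbon tax ${tax:>3}/tCO2e -> production cost ${c:.2f}/gallon")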
The objective of this work is to perform an uncertainty quantification (UQ) and model validation analysis of simulations of tests in the cross-wind test facility (XTF) at Sandia National Laboratories. In these tests, a calorimeter was subjected to a fire and its thermal response was measured via thermocouples (TCs). The UQ and validation analysis pertains to the experimental and predicted thermal response of the calorimeter. The calculations were performed using Sierra/Fuego/Syrinx/Calore, an Advanced Simulation and Computing (ASC) code capable of predicting object thermal response to a fire environment. Based on the validation results at eight diversely representative TC locations on the calorimeter, the predicted calorimeter temperatures effectively bound the experimental temperatures. This post-validates Sandia's first integrated use of fire modeling with thermal response modeling and associated uncertainty estimates in an abnormal-thermal QMU analysis.
This report summarizes the strategy and preparations for the first phase of the pressurized water reactor (PWR) ignition experimental program. During this phase, a single full-length, prototypic 17×17 PWR fuel assembly will be used to simulate a severe loss-of-coolant accident in the spent fuel pool, in which the fuel is completely uncovered and heats up until ignition of the cladding occurs. Electrically resistive heaters with zircaloy cladding will substitute for the spent nuclear fuel. The assembly will be placed in a single pool cell with the outer wall well insulated. This boundary condition will imitate the situation of an assembly surrounded by assemblies of similar offload age.
The relationship between explosive yield and seismic magnitude has been extensively studied for underground nuclear tests larger than about 1 kt. For monitoring smaller tests over local ranges (within 200 km), we need to know whether the available formulas can be extrapolated to much lower yields. Here, we review published information on amplitude decay with distance, and on the seismic magnitudes of industrial blasts and refraction explosions in the western U. S. Next we measure the magnitudes of some similar shots in the northeast. We find that local magnitudes ML of small, contained explosions are reasonably consistent with the magnitude-yield formulas developed for nuclear tests. These results are useful for estimating the detection performance of proposed local seismic networks.
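Magnitude-yield relations for contained explosions are commonly written in the generic form ML = a + b·log10(W); the sketch below uses placeholder coefficients purely to illustrate the extrapolation from the ~1 kt range down to much smaller shots, and does not use the calibrated values discussed in the paper:

    import math

    def local_magnitude(yield_kt, a=4.0, b=0.75):
        """Generic magnitude-yield relation ML = a + b*log10(W[kt]).
        The coefficients a and b are placeholders; calibrated values depend on
        region, depth of burial, and coupling conditions."""
        return a + b * math.log10(yield_kt)

    # Extrapolating toward the small, contained explosions of interest here.
    for w in (1.0, 0.1, 0.01, 0.001):  # yield in kt
        print(f"W = {w:>6} kt  ->  ML ~ {local_magnitude(w):.2f}")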
In many parts of the United States, as well as other regions of the world, competing demands for fresh water, or for water suitable for desalination, are outstripping sustainable supplies. In these areas, new water supplies are necessary to sustain economic development and agricultural uses, as well as to support expanding populations, particularly in the Southwestern United States. Increased water supply will most likely come through desalination of water reservoirs that are not suitable for present use. Surface-deployed seismic and electromagnetic (EM) methods have the potential for addressing these critical issues within large volumes of an aquifer at a lower cost than drilling and sampling. However, for detailed analysis of the water quality, some sampling utilizing boreholes would be required, with geophysical methods being employed to extrapolate these sampled results to non-sampled regions of the aquifer. The research in this report addresses using seismic and EM methods in two complementary ways to aid in the identification of water reservoirs that are suitable for desalination. The first method uses the seismic data to constrain the earth structure so that detailed EM modeling can estimate the pore water conductivity, and hence the salinity. The second method utilizes the coupling of seismic and EM waves through the seismo-electric effect (conversion of seismic energy to electrical energy) and the electro-seismic effect (conversion of electrical energy to seismic energy) to estimate the salinity of the target aquifer. Analytic 1D solutions to coupled pressure and electric wave propagation demonstrate the types of waves one expects when using a seismic or electric source. A 2D seismo-electric/electro-seismic model is developed to demonstrate the coupled seismic and EM system. For finite-difference modeling, the seismic and EM wave propagation algorithms operate on different spatial and temporal scales. We present a method to solve multiple, coupled finite-difference physics problems that has application beyond the present use. A limited field experiment was conducted to assess the seismo-electric effect. Due to a variety of problems, the observation of the electric field due to a seismic source is not definitive.
We investigate the potential for neutron generation using the 1 MeV RHEPP-1 intense pulsed ion beam facility at Sandia National Laboratories for a number of emerging applications. Among these is interrogation of cargo for detection of special nuclear materials (SNM). Ions from single-stage sources driven by pulsed power represent a potential source of significant neutron bursts. While a number of applications require higher ion energies (e.g., tens of MeV) than RHEPP-1 provides, its ability to generate deuterium beams allows for neutron generation at and below 1 MeV. This report details the successful generation and characterization of deuterium ion beams, and their use in generating up to 3 × 10¹⁰ neutrons into 4π per 5 kA ion pulse.
This paper has three goals. The first is to review Shannon's theory of information and the subsequent advances leading to today's statistics-based text analysis algorithms, showing that the semantics of the text is neglected. The second goal is to propose an extension of Shannon's original model that can take into account semantics, where the 'semantics' of a message is understood in terms of the intended or actual changes on the recipient of a message. The third goal is to propose several lines of research that naturally fall out of the proposed model. Each computational approach to solving some problem rests on an underlying model or set of models that describe how key phenomena in the real world are represented and how they are manipulated. These models are both liberating and constraining. They are liberating in that they suggest a path of development for new tools and algorithms. They are constraining in that they intentionally ignore other potential paths of development. Modern statistics-based text analysis algorithms have a specific intellectual history and set of underlying models rooted in Shannon's theory of communication. For Shannon, language is treated as a stochastic generator of symbol sequences. Shannon himself, subsequently Weaver, and at least one of his predecessors are all explicit in their decision to exclude semantics from their models. This rejection of semantics as 'irrelevant to the engineering problem' is elegant and, combined with developments particularly by Salton and subsequently by Latent Semantic Analysis, has led to a whole collection of powerful algorithms and an industry for data mining technologies. However, the kinds of problems currently facing us go beyond what can be accounted for by this stochastic model. Today's problems increasingly focus on the semantics of specific pieces of information. And although progress is being made with the old models, it seems natural to develop or extend information theory to account for semantics. By developing such a theory, we can improve the quality of the next generation of analytical tools. Far from being a mere intellectual curiosity, a new theory can provide the means for us to take into account information that has to date been ignored by the algorithms and technologies we develop. This paper will begin with an examination of Shannon's theory of communication, discussing the contributions and limitations of the theory and how that theory gets expanded into today's statistical text analysis algorithms. Next, we will expand Shannon's model and suggest a transactional definition of semantics that focuses on the intended and actual change that messages have on the recipient. Finally, we will examine implications of the model for algorithm development.
An initial version of a System Dynamics (SD) modeling framework was developed for the analysis of a broad range of energy technology and policy questions. The specific question selected to demonstrate this process was 'what would be the carbon and import implications of expanding nuclear electric capacity to provide power for plug-in hybrid vehicles?' Fifteen SNL SD energy models were reviewed, and the US Energy and Greenhouse gas model (USEGM) and the Global Nuclear Futures model (GEFM) were identified as the basis for an initial modeling framework. A basic U.S. transportation model was created to model U.S. fleet changes. In the rapid adoption scenario, almost 40% of light-duty vehicles are PHEVs by 2040, which requires about 37 GWy/y of additional electricity demand, equivalent to about 25 new 1.4 GWe nuclear plants. The adoption rate of PHEVs would likely be the controlling factor in achieving the associated reduction in carbon emissions and imports.
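The plant-count equivalence is a simple unit conversion; the sketch below reproduces it with the capacity factor as an explicit assumed input, since the abstract does not state the value used in the study:

    def plants_required(avg_demand_gwy_per_yr, plant_rating_gwe, capacity_factor):
        """Number of plants needed to supply a given average electric demand.
        1 GWy/y of average demand corresponds to 1 GWe of continuously delivered power."""
        return avg_demand_gwy_per_yr / (plant_rating_gwe * capacity_factor)

    additional_demand = 37.0  # GWy/y of added demand from PHEV charging (from the study)
    plant_rating = 1.4        # GWe per new nuclear plant (from the study)

    for cf in (1.0, 0.9):     # capacity factor is an assumption, not stated above
        n = plants_required(additional_demand, plant_rating, cf)
        print(f"capacity factor {cf:.2f}: ~{n:.0f} plants")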
The density of molten nitrate salts was measured to determine the effects of the constituents on the density of multi-component mixtures. The molten salts consisted of various proportions of the nitrates of potassium, sodium, lithium and calcium. Density measurements were performed using an Archimedean method, and the results were compared to data reported in the literature for the individual constituent salts or simple combinations, such as the binary Solar Salt mixture of NaNO3 and KNO3. The addition of calcium nitrate generally increased density relative to potassium nitrate or sodium nitrate, while lithium nitrate decreased density. The temperature dependence of density is described by a linear equation regardless of composition. The molar volume, and thereby the density, of multi-component mixtures can be calculated as a function of temperature using a linear additivity rule based on the properties of the individual constituents.
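A minimal sketch of the linear additivity rule, assuming each pure constituent's density is linear in temperature; the coefficients below are illustrative placeholders, not the measured values reported here:

    # Linear additivity of molar volumes for a molten nitrate salt mixture:
    #   V_mix(T) = sum_i x_i * M_i / rho_i(T),   rho_mix(T) = sum_i x_i * M_i / V_mix(T)
    # where x_i are mole fractions, M_i molar masses (g/mol), and each pure-salt
    # density is linear in temperature: rho_i(T) = a_i - b_i * T (g/cm^3, T in deg C).

    # Placeholder pure-component coefficients (a, b) -- illustrative only.
    salts = {
        "NaNO3":    {"M": 84.99,  "a": 2.13, "b": 7.0e-4},
        "KNO3":     {"M": 101.10, "a": 2.11, "b": 7.3e-4},
        "LiNO3":    {"M": 68.95,  "a": 1.92, "b": 5.5e-4},
        "Ca(NO3)2": {"M": 164.09, "a": 2.50, "b": 6.0e-4},
    }

    def mixture_density(mole_fractions, temperature_c):
        mass = sum(x * salts[s]["M"] for s, x in mole_fractions.items())
        volume = sum(x * salts[s]["M"] / (salts[s]["a"] - salts[s]["b"] * temperature_c)
                     for s, x in mole_fractions.items())
        return mass / volume  # g/cm^3

    solar_salt = {"NaNO3": 0.64, "KNO3": 0.36}  # approximate molar ratio of Solar Salt
    print(f"rho(400 C) ~ {mixture_density(solar_salt, 400.0):.3f} g/cm^3")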
This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. Other models included in the discussion are not used by or adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models, and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained, and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results, and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.
In this paper we develop an aft-body loading function for penetration simulations that is based on the spherical cavity-expansion approximation. This loading function assumes that a preexisting cavity of radius a₀ exists before the expansion occurs, which causes the radial stress on the cavity surface to be less than what is obtained if the cavity is opened from a zero initial radius. This in turn produces less resistance on the aft body as it penetrates the target, allowing greater rotation of the penetrator. Results from simulations are compared with experimental results for oblique penetration into a concrete target with an unconfined compressive strength of 23 MPa.
Biofouling of water-treatment membranes, the unwanted growth of biofilms on the membrane surface, negatively impacts desalination and water treatment. With biofouling there is a decrease in permeate production, degradation of permeate water quality, and an increase in energy expenditure due to the increased cross-flow pressure required. To date, a universally successful and cost-effective method for controlling biofouling has not been implemented. The overall goal of the work described in this report was to use high-performance computing to direct polymer, material, and biological research to create the next generation of water-treatment membranes. Both physical (micromixers: UV-curable epoxy traces printed on the surface of a water-treatment membrane that promote chaotic mixing) and chemical (quaternary ammonium groups) modifications of the membranes were evaluated for their ability to increase resistance to biofouling. Creation of low-cost, efficient water-treatment membranes helps assure the availability of fresh water for human use, a growing need in both the U.S. and the world.