Publications


Fluorescence measurements for evaluating the application of multivariate analysis techniques to optically thick environments

Reichardt, Thomas A.; Schmitt, Randal L.; Sickafoose, Shane S.; Jones, Howland D.; Timlin, Jerilyn A.

Laser-induced fluorescence measurements of cuvette-contained laser dye mixtures are made to evaluate the application of multivariate analysis techniques to optically thick environments. Nine mixtures of Coumarin 500 and Rhodamine 610 are analyzed, as well as the pure dyes. For each sample, the cuvette is positioned on a two-axis translation stage to allow interrogation at different spatial locations, enabling examination of both primary (absorption of the laser light) and secondary (absorption of the fluorescence) inner filter effects. In addition to these expected inner filter effects, we find evidence that a portion of the absorbed fluorescence is re-emitted. A total of 688 spectra are acquired for the evaluation of multivariate analysis approaches that account for these nonlinear effects.
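
As background for the primary and secondary inner filter effects discussed above, the textbook correction for a centered 1 cm cuvette rescales the observed fluorescence by the excitation and emission absorbances. A minimal sketch (this is the standard correction, not a method from this paper):

```python
def inner_filter_correction(f_obs, a_ex, a_em):
    """Correct observed fluorescence for primary (excitation, a_ex) and
    secondary (emission, a_em) inner filter effects:
    F_corr = F_obs * 10**((A_ex + A_em) / 2)."""
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)
```

This simple correction breaks down for strongly absorbing samples, and, as the abstract notes, it does not account for re-emission of absorbed fluorescence.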

Challenges in structural analysis for deformed nuclear reactivity assessments

Villa, Daniel V.; Tallman, Tyler N.

Launch safety calculations for past space reactor concepts have usually been limited to immersion of the reactor in water and/or sand, using nominal system geometries or, in some cases, simplified compaction scenarios. Deformation of the reactor core by impact during the accident sequence typically has not been considered because of the complexity of the calculation. Recent advances in codes and computing power have made such calculations feasible. The accuracy of such calculations depends primarily on the underlying structural analysis. Even though explicit structural dynamics is a mature field, nuclear reactors present significant challenges for obtaining accurate deformation predictions. The presence of a working fluid is one of the primary contributors to these challenges. The fluid-structure interaction cannot be neglected because the fluid surrounds the nuclear fuel, which is the most important region in the analysis. A detailed model of a small eighty-five-pin reactor was built with the working fluid modeled as smoothed particle hydrodynamics (SPH) elements. Filling the complex volume occupied by the working fluid with SPH elements required the development of an algorithm that eliminates overlaps between hexahedral and SPH elements. The results with and without the working fluid were found to differ considerably in their reactivity predictions.

Age-aware solder performance models: level 2 milestone completion

Holm, Elizabeth A.; Neilsen, Michael K.; Vianco, Paul T.; Neidigk, Matthew N.

Legislated requirements and industry standards are replacing eutectic lead-tin (Pb-Sn) solders with lead-free (Pb-free) solders in future component designs and in replacements and retrofits. Since Pb-free solders have not yet seen service for long periods, their long-term behavior is poorly characterized. Because understanding the reliability of Pb-free solders is critical to supporting the next generation of circuit board designs, it is imperative that we develop, validate and exercise a solder lifetime model that can capture the thermomechanical response of Pb-free solder joints in stockpile components. To this end, an ASC Level 2 milestone was identified for fiscal year 2010: Milestone 3605: Utilize experimentally validated constitutive model for lead-free solder to simulate aging and reliability of solder joints in stockpile components. This report documents the completion of this milestone, including evidence that the milestone completion criteria were met and a summary of the milestone Program Review.

The integration of process monitoring for safeguards

Cipiti, Benjamin B.; Zinaman, Owen R.

The Separations and Safeguards Performance Model is a reprocessing plant model that has been developed for safeguards analyses of future plant designs. The model has been modified to integrate bulk process monitoring data with traditional plutonium inventory balances to evaluate potential advanced safeguards systems. Taking advantage of the wealth of operator data such as flow rates and mass balances of bulk material, the timeliness of detection of material loss was shown to improve considerably. Four diversion cases were tested, including both abrupt and protracted diversions at early and late times in the run. The first three cases indicated alarms before half of a significant quantity of material was removed. The buildup of error over time prevented detection in the case of a protracted diversion late in the run. Some issues related to the alarm conditions and bias correction will need to be addressed in future work. This work demonstrates the use of the model both for performing diversion scenario analyses and for testing advanced safeguards system designs.

Energy balance in peridynamics

Silling, Stewart A.; Lehoucq, Richard B.

The peridynamic model of solid mechanics treats internal forces within a continuum through interactions across finite distances. These forces are determined through a constitutive model that, in the case of an elastic material, permits the strain energy density at a point to depend on the collective deformation of all the material within some finite distance of it. The forces between points are evaluated from the Fréchet derivative of this strain energy density with respect to the deformation map. The resulting equation of motion is an integro-differential equation written in terms of these interparticle forces, rather than the traditional stress tensor field. Recent work on peridynamics has elucidated the energy balance in the presence of these long-range forces. We have derived the appropriate analogue of stress power, called absorbed power, that leads to a satisfactory definition of internal energy. This internal energy is additive, allowing us to meaningfully define an internal energy density field in the body. An expression for the local first law of thermodynamics within peridynamics combines this mechanical component, the absorbed power, with heat transport. The global statement of the energy balance over a subregion can be expressed in a form in which the mechanical and thermal terms contain only interactions between the interior of the subregion and the exterior, a form anticipated by Noll in 1955. The local form of this first law within peridynamics, coupled with the second law as expressed in the Clausius-Duhem inequality, is amenable to the Coleman-Noll procedure for deriving restrictions on the constitutive model for thermomechanical response. Using an idea suggested by Fried in the context of systems of discrete particles, this procedure leads to a dissipation inequality for peridynamics that has a surprising form. It also leads to a thermodynamically consistent way to treat damage within the theory, shedding light on how damage, including the nucleation and advance of cracks, should be incorporated into a constitutive model.
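
For orientation, the integro-differential equation of motion referred to above can be written, in the bond-based special case standard in the peridynamics literature (the abstract's state-based setting generalizes this), as:

```latex
% Peridynamic equation of motion: the internal force at x is an integral of
% pairwise force densities f over the neighborhood H_x (radius = the horizon)
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{\mathcal{H}_{\mathbf{x}}}
      \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
                      \mathbf{x}'-\mathbf{x}\bigr)\,\mathrm{d}V_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t)
```

where b is the external body force density and, for an elastic material, the pairwise force density f follows from the Fréchet derivative of the strain energy density with respect to the deformation, as described above.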

Oxy-combustion of pulverized coal: modeling of char-combustion kinetics

Geier, M.; Shaddix, Christopher R.

In this study, char combustion of pulverized coal under oxy-fuel combustion conditions was investigated on the basis of experimentally observed temperature-size characteristics and corresponding predictions of numerical simulations. Using a combustion-driven entrained flow reactor equipped with an optical particle-sizing pyrometer, the combustion characteristics (particle temperatures and apparent sizes) of pulverized coal char particles were determined for combustion in both reduced-oxygen and oxygen-enriched atmospheres with either an N2 or a CO2 bath gas. The two coals investigated were a low-sulfur, high-volatile bituminous coal (Utah Skyline) and a low-sulfur subbituminous coal (North Antelope), both size-classified to 75-106 µm. A particular focus of this study lies in the analysis of the predictive capabilities of simplified models that capture char combustion characteristics but exhibit the lowest possible complexity and thus facilitate incorporation in existing computational fluid dynamics (CFD) simulation codes. For this purpose, char consumption characteristics were calculated for char particles in the size range 10-200 µm using (1) single-film, apparent kinetic models with a chemically 'frozen' boundary layer, and (2) a reacting porous particle model with detailed gas-phase kinetics and three separate heterogeneous reaction mechanisms of char oxidation and gasification. A comparison of model results with experimental data suggests that single-film models with reaction orders between 0.5 and 1 with respect to the surface oxygen partial pressure may be capable of adequately predicting the temperature-size characteristics of char consumption, provided heterogeneous (steam and CO2) gasification reactions are accounted for.
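
The single-film, apparent-kinetics picture described above balances nth-order surface kinetics against film diffusion of oxygen to the particle. A minimal sketch (the rate parameters and film-diffusion coefficient here are illustrative placeholders, not the paper's fitted kinetics):

```python
import math

R = 8.314  # universal gas constant, J/mol-K

def surface_rate_coeff(t_p, a_pre=1.0e2, e_act=8.0e4):
    """Apparent Arrhenius rate coefficient k_s (illustrative A and E)."""
    return a_pre * math.exp(-e_act / (R * t_p))

def burn_rate(p_ox_bulk, t_p, k_d=0.05, n=0.5):
    """Single-film char consumption rate per unit external surface.

    Balances nth-order surface kinetics k_s * p_s**n against film
    diffusion k_d * (p_bulk - p_s), solving for the surface oxygen
    partial pressure p_s by bisection.
    """
    k_s = surface_rate_coeff(t_p)
    lo, hi = 0.0, p_ox_bulk
    p_s = 0.0
    for _ in range(60):  # root of f(p_s) = k_s*p_s**n - k_d*(p_bulk - p_s)
        p_s = 0.5 * (lo + hi)
        if k_s * p_s**n > k_d * (p_ox_bulk - p_s):
            hi = p_s
        else:
            lo = p_s
    return k_s * p_s**n
```

With these placeholder values the rate approaches the diffusion limit k_d * p_bulk at high particle temperature, which is the qualitative behavior the single-film model is meant to capture.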

Risk-informed separation distances for hydrogen gas storage facilities

Keller, Jay O.; Ruggles, Adam J.; Dedrick, Daniel E.; Moen, Christopher D.; Evans, Gregory H.; LaChance, Jeffrey L.; Winters, William S.; Houf, William G.; Zhang, Jiayao Z.

The use of risk information in establishing code and standard requirements enables (1) an adequate and appropriate level of safety and (2) deployment of hydrogen facilities that are as safe as gasoline facilities. This effort provides a template for clear and defensible regulations, codes, and standards that can enable international market transformation.

A threat-based definition of IA and IA-enabled products

Shakamuri, Mayuri S.

This paper proposes a definition of 'IA and IA-enabled products' based on threat, as opposed to 'security services' (i.e., 'confidentiality, authentication, integrity, access control or non-repudiation of data'), as provided by Department of Defense (DoD) Instruction 8500.2, 'Information Assurance (IA) Implementation.' The DoDI 8500.2 definition is too broad, making it difficult to distinguish products that need higher protection from those that do not. As a consequence, the products that need higher protection do not receive it, increasing risk. The threat-based definition proposed in this paper solves those problems by focusing attention on threats, thereby moving beyond compliance to risk management. (DoDI 8500.2 provides the definitions and controls that form the basis for IA across the DoD.) Familiarity with DoDI 8500.2 is assumed.

Challenges in simulation automation and archival

Blacker, Ted D.

The challenges of simulation streamlining and automation continue. The needs for analysis verification, reviews, quality assurance, pedigree, and archiving are strong. These automation and archival needs can alternate between competing and complementing when determining how to improve the analysis environment and process. The needs compete for priority, resource allocation, and business practice importance. Likewise, implementation strategies for both automation and archival can swing from rather local work groups to more global corporate initiatives. Questions abound about needed connectivity (and the extent of this connectivity) to various CAD systems, product data management (PDM) systems, test data repositories, and various information management implementations. This is a complex set of constraints. This presentation will bring focus to this complex environment through sharing experiences. The experiences are those gleaned over years of effort at Sandia to make reasonable sense out of the decisions to be made. It will include a discussion of the integration and development of home-grown tools for both automation and archival. It will also include an overview of efforts to understand local requirements, compare in-house tools to commercial offerings against those requirements, and options for future progress. We hope that sharing this rich set of experiences will prove useful to others struggling to make progress in their own environments.

Kernel-based Linux emulation for Plan 9

Minnich, Ronald G.

CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

Determining the Bayesian optimal sampling strategy in a hierarchical system

Boggs, Paul T.; Pebay, Philippe P.; Ringland, James T.

Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.

The generation of shared cryptographic keys through channel impulse response estimation at 60 GHz

Forman, Michael F.; Young, Derek Y.

Methods to generate private keys based on wireless channel characteristics have been proposed as an alternative to standard key-management schemes. In this work, we discuss past work in the field and offer a generalized scheme for the generation of private keys using uncorrelated channels in multiple domains. Proposed cognitive enhancements measure channel characteristics, to dynamically change transmission and reception parameters as well as estimate private key randomness and expiration times. Finally, results are presented on the implementation of a system for the generation of private keys for cryptographic communications using channel impulse-response estimation at 60 GHz. The testbed is composed of commercial millimeter-wave VubIQ transceivers, laboratory equipment, and software implemented in MATLAB. Novel cognitive enhancements are demonstrated, using channel estimation to dynamically change system parameters and estimate cryptographic key strength. We show for a complex channel that secret key generation can be accomplished on the order of 100 kb/s.
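
The core idea of channel-based key generation is that both ends observe (approximately) the same reciprocal channel and quantize it to the same bits. A minimal illustrative quantizer (this is a generic median-threshold scheme with a guard band, not the authors' exact method):

```python
import hashlib

def quantize_channel(samples, guard=0.1):
    """Turn reciprocal channel-gain estimates into bits: threshold at the
    median, and drop samples inside a guard band around it, where
    measurement noise would make the two ends disagree."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    spread = max(samples) - min(samples)
    bits = []
    for s in samples:
        if abs(s - median) > guard * spread:
            bits.append(1 if s > median else 0)
    return bits

def derive_key(bits):
    """Hash the agreed bit string down to a fixed-size 256-bit key."""
    return hashlib.sha256(bytes(bits)).hexdigest()
```

A real system would follow quantization with information reconciliation and privacy amplification; the guard band only reduces, not eliminates, bit disagreement between the two ends.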

Meandered-line antenna with integrated high-impedance surface

Forman, Michael F.

A reduced-volume antenna composed of a meandered-line dipole antenna over a finite-width, high-impedance surface is presented. The structure is novel in that the high-impedance surface is implemented with four Sievenpiper via-mushroom unit cells, whose area is optimized to match the meandered-line dipole antenna. The result is an antenna similar in performance to a patch antenna but with one-fourth the area, which can be deployed directly on the surface of a conductor. Simulations demonstrate a 3.5 cm (λ/4) square antenna with a bandwidth of 4% and a gain of 4.8 dBi at 2.5 GHz.

Data intensive computing at Sandia

Wilson, Andrew T.

Data-intensive computing is parallel computing in which algorithms and software are designed around efficient access and traversal of a data set, and in which hardware requirements are dictated by data size as much as by desired run times; it usually distills compact results from massive data.

Verifiable process monitoring through enhanced data authentication

Ross, Troy R.; Schoeneman, Barry D.; Baldwin, George T.

To ensure the peaceful intent for production and processing of nuclear fuel, verifiable process monitoring of the fuel production cycle is required. As part of a U.S. Department of Energy (DOE)-EURATOM collaboration in the field of international nuclear safeguards, the DOE Sandia National Laboratories (SNL), the European Commission Joint Research Centre (JRC) and Directorate General-Energy (DG-ENER) developed and demonstrated a new concept in process monitoring, enabling the use of operator process information by branching a second, authenticated data stream to the Safeguards inspectorate. This information would be complementary to independent safeguards data, improving the understanding of the plant's operation. The concept is called the Enhanced Data Authentication System (EDAS). EDAS transparently captures, authenticates, and encrypts communication data transmitted between operator control computers and connected analytical equipment utilized in nuclear process control. The intent is to capture information as close to the sensor point as possible to assure the highest possible confidence in the branched data. Data must be collected transparently by the EDAS: operator processes should not be altered or disrupted by the insertion of the EDAS as a monitoring system for safeguards. EDAS employs public key authentication, providing 'jointly verifiable' data, and private key encryption for confidentiality. Timestamps and the data source are also added to the collected data for analysis. The core of the system hardware is in a security enclosure with both active and passive tamper indication. Further, the system has the ability to monitor seals or other security devices in close proximity. This paper will discuss the EDAS concept, recent technical developments, intended application philosophy and the planned future progression of this system.
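
The capture-timestamp-authenticate flow described above can be sketched roughly as follows (a simplified stand-in: HMAC replaces EDAS's public-key authentication, encryption is omitted, and all names are illustrative):

```python
import hashlib
import hmac
import json
import time

def branch_record(raw: bytes, source_id: str, auth_key: bytes) -> dict:
    """Wrap a captured instrument message with a timestamp, a source tag,
    and an authentication tag over the canonical JSON serialization."""
    rec = {
        "source": source_id,
        "timestamp": time.time(),
        "payload": raw.hex(),
    }
    body = json.dumps(rec, sort_keys=True).encode()
    rec["tag"] = hmac.new(auth_key, body, hashlib.sha256).hexdigest()
    return rec

def verify_record(rec: dict, auth_key: bytes) -> bool:
    """Recompute the tag over the same canonical serialization."""
    body = json.dumps(
        {k: rec[k] for k in ("source", "timestamp", "payload")},
        sort_keys=True,
    ).encode()
    expect = hmac.new(auth_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(rec["tag"], expect)
```

Canonical serialization (sorted keys) matters: signer and verifier must hash byte-identical data, which is also why capturing close to the sensor, before any reformatting, raises confidence in the branched stream.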

Secure Video Surveillance System (SVSS) for unannounced safeguards inspections

Pinkalla, Mark P.

The Secure Video Surveillance System (SVSS) is a collaborative effort between the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC). The joint project addresses specific requirements of redundant surveillance systems installed in two South American nuclear facilities as a tool to support unannounced inspections conducted by ABACC and the International Atomic Energy Agency (IAEA). The surveillance covers the critical time (as much as a few hours) between the notification of an inspection and the access of inspectors to the location in the facility where surveillance equipment is installed. ABACC and the IAEA currently use the EURATOM Multiple Optical Surveillance System (EMOSS). This outdated system is no longer available or supported by the manufacturer. The current EMOSS system has met the project objective; however, the lack of available replacement parts and system support has made the system unsustainable and has increased the risk of an inoperable system. A new system that utilizes current technology and is maintainable is required to replace the aging EMOSS system. ABACC intends to replace one of the existing ABACC EMOSS systems with the Secure Video Surveillance System. SVSS utilizes commercial off-the-shelf (COTS) technologies for all individual components. Sandia National Laboratories supported the system design for SVSS to meet safeguards requirements, i.e., tamper indication, data authentication, etc. The SVSS consists of two video surveillance cameras linked securely to a data collection unit. The collection unit is capable of retaining historical surveillance data for at least three hours, with picture intervals as short as 1 second. Images in .jpg format are available to inspectors using various software review tools. SNL has delivered two SVSS systems for test and evaluation at the ABACC Safeguards Laboratory. An additional prototype system remains at SNL for software and hardware testing. This paper will describe the capabilities of the new surveillance system, its application and requirements, and the design approach.

Xyce parallel electronic simulator design

Keiter, Eric R.; Russo, Thomas V.; Schiek, Richard S.; Thornquist, Heidi K.; Mei, Ting M.

This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the ground up to be a SPICE-compatible, distributed-memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator, so having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort involving a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to this diversity of background, a certain amount of staff turnover is to be expected on long-term projects as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document, in one place, a number of the software quality practices followed by the Xyce team. It is also hoped that this document will be a good source of information for new developers.

LDRD 149045 final report: distinguishing documents

Mitchell, Scott A.

This LDRD 149045 final report describes work that Sandians Scott A. Mitchell, Randall Laviolette, Shawn Martin, Warren Davis, Cindy Philips and Danny Dunlavy performed in 2010. Prof. Afra Zomorodian provided insight. This was a small late-start LDRD. Several other ongoing efforts were leveraged, including the Networks Grand Challenge LDRD and the Computational Topology CSRF project; some of the leveraged work is described here. We proposed a sentence mining technique that exploited both the distribution and the order of parts-of-speech (POS) in sentences in English language documents. The ultimate goal was to be able to discover 'call-to-action' framing documents hidden within a corpus of mostly expository documents, even if the documents were all on the same topic and used the same vocabulary. Using POS was novel. We also took a novel approach to analyzing POS. We used the hypothesis that English follows a dynamical system and the POS are trajectories from one state to another. We analyzed the sequences of POS using support vector machines and the cycles of POS using computational homology. We discovered that the POS were a very weak signal and did not support our hypothesis well. Our original goal appeared to be unobtainable with our original approach. We turned our attention to studying an aspect of a more traditional approach to distinguishing documents. Latent Dirichlet Allocation (LDA) turns documents into bags-of-words and then into mixture-model points. A distance function is used to cluster groups of points to discover relatedness between documents. We performed a geometric and algebraic analysis of the most popular distance functions and made some significant and surprising discoveries, described in a separate technical report.

Automatic recognition of malicious intent indicators

Koch, Mark W.; Nguyen, Hung D.; Giron, Casey; Yee, Mark L.; Drescher, Steven M.

A major goal of next-generation physical protection systems is to extend defenses far beyond the usual outer-perimeter-fence boundaries surrounding protected facilities. Mitigation of nuisance alarms is among the highest priorities. A solution to this problem is to create a robust capability to Automatically Recognize Malicious Indicators of intruders. In extended defense applications, it is not enough to distinguish humans from all other potential alarm sources as human activity can be a common occurrence outside perimeter boundaries. Our approach is unique in that it employs a stimulus to determine a malicious intent indicator for the intruder. The intruder's response to the stimulus can be used in an automatic reasoning system to decide the intruder's intent.

Grid-tied PV battery systems

Hund, Thomas D.; Gonzalez, Sigifredo G.

Grid-tied PV energy smoothing was implemented by using a valve-regulated lead-acid (VRLA) battery as a temporary energy storage device to both charge and discharge as required to smooth the inverter energy output from the PV array. Inverter output was controlled by the average solar irradiance over the previous 1 h time interval. On a clear day the solar irradiance power curve is offset by about 1 h, while on a variably cloudy day the inverter output power curve is smoothed based on the average solar irradiance. Test results demonstrate that this smoothing algorithm works very well. Battery state of charge was more difficult to manage because of the variable system inefficiencies. Testing continued for 30 days and established consistent operational performance for extended periods of time under a wide variety of resource conditions. Both battery technologies, from Exide (Absolyte) and East Penn (ALABC Advanced), proved to cycle well at a partial state of charge over the time interval tested.
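
The smoothing rule described above (inverter output follows the trailing-average irradiance, with the battery charging or discharging to absorb the difference) can be sketched as follows; the window length, sampling rate, and PV rating are illustrative choices, not the tested system's configuration:

```python
from collections import deque

class IrradianceSmoother:
    """Command inverter output from the trailing mean solar irradiance."""

    def __init__(self, window_samples=60):
        # roughly 1 h of 1-minute irradiance samples
        self.buf = deque(maxlen=window_samples)

    def setpoint_kw(self, irradiance_w_m2, pv_rating_kw=10.0, stc_w_m2=1000.0):
        """Return the inverter setpoint in kW: the array rating scaled by
        the window-average irradiance relative to standard test conditions."""
        self.buf.append(irradiance_w_m2)
        avg = sum(self.buf) / len(self.buf)
        return pv_rating_kw * avg / stc_w_m2
```

On a clear day this reproduces the roughly 1 h offset noted above; under fast-moving clouds a sudden irradiance drop moves the setpoint only slightly, with the battery supplying the shortfall.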

The development and application of the Remotely Monitored Sealing Array (RMSA)

Schoeneman, Barry D.

Advanced sealing technologies are often an integral part of a containment surveillance (CS) approach to detect undeclared diversion of nuclear materials. As adversarial capabilities continue to advance, the sophistication of the seal design must advance as well. The intelligent integration of security concepts into a physical technology used to seal monitored items is a fundamental requirement for secure containment. Seals have a broad range of capabilities. These capabilities must be matched appropriately to the application to establish the greatest effectiveness from the seal. However, many current seal designs and their applications fail to provide the high confidence of detection and timely notification that can be achieved with new technology. Additionally, as monitoring needs rapidly expand, outpacing budgets, remote monitoring of low-cost autonomous sealing technologies becomes increasingly appealing. The Remotely Monitored Sealing Array (RMSA) utilizes this technology and implements cost-effective security concepts, establishing the high confidence that is expected of active sealing technology today. RMSA is a system of relatively low-cost but secure active loop seals for the monitoring of nuclear material containers. The sealing mechanism is a fiber optic loop that is pulsed using a low-power LED circuit with a coded signal to verify integrity. Battery life is conserved by the use of sophisticated power management techniques, permitting many years of reliable operation without battery replacement or other maintenance. Individual seals communicate by radio using a secure transmission protocol on either of two specially designated communication frequency bands. Signals are encrypted and authenticated by a private key established during the installation procedure, and the seal bodies feature both active and passive tamper indication. Seals broadcast to a central 'translator', at which information is stored locally and/or transmitted remotely for review. The system is especially appropriate for nuclear material storage facilities, indoor or outdoor, enabling remote inspection of status rather than tedious individual seal verification, and without the need for interconnected cabling. A handheld seal verifier is also available for an inspector to verify any particular seal in close proximity. This paper will discuss the development of the RMSA sealing system, its capabilities, its application philosophy, and projected future trends.

Entrepreneurial separation to transfer technology

Fairbanks, Richard R.

Under the Entrepreneurial Separation to Transfer Technology (ESTT) program, entrepreneurs terminate their employment with Sandia to start up or help expand technology businesses. The term of the separation is two years, with the option to request a third year. Entrepreneurs are guaranteed reinstatement by Sandia if they return before the ESTT term expires.

Predicting fracture in micron-scale polycrystalline silicon MEMS structures

Boyce, Brad B.; Foulk, James W.; Field, Richard V.; Ohlhausen, J.A.

Designing reliable MEMS structures presents numerous challenges. Polycrystalline silicon fractures in a brittle manner with considerable variability in measured strength. Furthermore, it is not clear how to use a measured tensile strength distribution to predict the strength of a complex MEMS structure. To address such issues, two recently developed high throughput MEMS tensile test techniques have been used to measure strength distribution tails. The measured tensile strength distributions enable the definition of a threshold strength as well as an inferred maximum flaw size. The nature of strength-controlling flaws has been identified and sources of the observed variation in strength investigated. A double edge-notched specimen geometry was also tested to study the effect of a severe, micron-scale stress concentration on the measured strength distribution. Strength-based, Weibull-based, and fracture mechanics-based failure analyses were performed and compared with the experimental results.
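The threshold-strength idea above maps naturally onto a three-parameter Weibull model, in which the failure probability is zero below a threshold stress. The sketch below is illustrative only; the parameter values are hypothetical, not the measured polysilicon distributions.

```python
import math

def weibull_failure_prob(sigma, sigma_u, sigma_0, m):
    """Three-parameter Weibull failure probability:
    P_f = 1 - exp(-((sigma - sigma_u) / sigma_0)**m) for sigma > sigma_u,
    where sigma_u is the threshold stress below which P_f = 0."""
    if sigma <= sigma_u:
        return 0.0
    return 1.0 - math.exp(-(((sigma - sigma_u) / sigma_0) ** m))

# Hypothetical parameters (GPa) for a brittle polysilicon-like material
sigma_u, sigma_0, m = 2.0, 1.0, 4.0

assert weibull_failure_prob(1.5, sigma_u, sigma_0, m) == 0.0   # below threshold
assert 0.0 < weibull_failure_prob(2.8, sigma_u, sigma_0, m) < 1.0
```

Fitting such a distribution to high-throughput tensile data is what permits a threshold strength, and hence a maximum flaw size, to be inferred.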

More Details

Micro-optics for imaging

Boye, Robert B.

This project investigates the fundamental imaging capability of an optic with a physical thickness substantially less than 1 mm. The analysis assumes that post-processing can overcome certain restrictions such as detector pixel size and image degradation due to aberrations. A first order optical analysis quickly reveals the limitations of even an ideal thin lens to provide sufficient image resolution and provides the justification for pursuing an annular design. Some straightforward examples clearly show the potential of this approach. The tradeoffs associated with annular designs, specifically field of view limitations and reduced mid-level spatial frequencies, are discussed and their impact on the imaging performance evaluated using several imaging examples. Additionally, issues such as detector acceptance angle and the need to balance aberrations with resolution are included in the analysis. With these restrictions, the final results present an excellent approximation of the expected performance of the lens designs presented.

More Details

FISH 'N' Chips : a single cell genomic analyzer for the human microbiome

Meagher, Robert M.; Patel, Kamlesh P.; Light, Yooli K.; Liu, Peng L.; Singh, Anup K.

Uncultivable microorganisms likely play significant roles in the ecology within the human body, with subtle but important implications for human health. Focusing on the oral microbiome, we are developing a processor for targeted isolation of individual microbial cells, facilitating whole-genome analysis without the need for isolation of pure cultures. The processor consists of three microfluidic modules: identification based on 16S rRNA fluorescence in situ hybridization (FISH), fluorescence-based sorting, and encapsulation of individual selected cells into small droplets for whole genome amplification. We present here a technique for performing microscale FISH and flow cytometry, as a prelude to single cell sorting.

More Details

Influence of orientation on the size effect in BCC pillars with different critical temperatures

Proposed for publication in Materials Science and Engineering A.

Clark, Blythe C.

The size effect in body-centered cubic metals is comprehensively investigated through micro/nano-compression tests performed on focused ion beam machined tungsten (W), molybdenum (Mo) and niobium (Nb) pillars, with single slip [2 3 5] and multiple slip [0 0 1] orientations. The results demonstrate that the stress-strain response is unaffected by the number of activated slip systems, indicating that dislocation-dislocation interaction is not a dominant mechanism for the observed diameter dependent yield strength and strain hardening. Furthermore, the limited mobility of screw dislocations, which is different for each material at ambient temperature, acts as an additional strengthening mechanism leading to a material dependent size effect. Nominal values and diameter dependence of the flow stress significantly deviate from studies on face-centered cubic metals. This is demonstrated by the correlation of size dependence with the material specific critical temperature. Activation volumes were found to decrease with decreasing pillar diameter further indicating that the influence of the screw dislocations decreases with smaller pillar diameter.

More Details

Assessment of methodologies for analysis of the Dungeness B accidental aircraft crash risk

Hansen, Clifford H.; LaChance, Jeffrey L.

The Health and Safety Executive (HSE) has requested that Sandia National Laboratories (SNL) review the aircraft crash methodologies for nuclear facilities that are being used in the United Kingdom (UK). The scope of the work included a review of one method utilized in the UK for assessing the potential for accidental airplane crashes into nuclear facilities (Task 1) and a comparison of the UK methodology against similar International Atomic Energy Agency (IAEA), United States (US) Department of Energy (DOE), and US Nuclear Regulatory Commission (NRC) methods (Task 2). Based on the conclusions from Tasks 1 and 2, an additional Task 3 would provide an assessment of a site-specific crash frequency for the Dungeness B facility using one of the other methodologies. This report documents the results of Task 2. The comparison of the different methods was performed for the three primary contributors to aircraft crash risk at the Dungeness B site: airfield-related crashes, crashes below airways, and background crashes. The methods and data specified in each methodology were compared for each of these risk contributors, differences in the methodologies were identified, and the importance of these differences was qualitatively and quantitatively assessed. The bases for each of the methods and the data used were considered in this assessment process. A comparison of the treatment of the consequences of the aircraft crashes was not included in this assessment because the frequency of crashes into critical structures is currently low based on the existing Dungeness B assessment. Although the comparison found substantial differences between the UK and the three alternative methodologies (IAEA, NRC, and DOE), this assessment concludes that use of any of these alternative methodologies would not change the conclusions reached for the Dungeness B site. Performance of Task 3 is thus not recommended.
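Methodologies of this kind generally reduce to a point-estimate crash frequency summed over crash sources (airfield operations, airway overflights, background traffic): F = Σ N · P · f(x, y) · A. The sketch below shows this generic form with wholly hypothetical inputs, not Dungeness B data or any specific methodology's tabulated values.

```python
def crash_frequency(sources):
    """Generic point-estimate aircraft crash frequency (crashes/yr):
    N    = annual flight operations for the source,
    P    = crash rate per operation,
    f_xy = crash-location probability per unit area at the site (1/km^2),
    A    = facility effective target area (km^2)."""
    return sum(N * P * f_xy * A for (N, P, f_xy, A) in sources)

# Hypothetical inputs: (operations/yr, crashes/operation, 1/km^2, km^2)
sources = [
    (50_000, 1e-7, 1e-2, 0.01),   # nearby airfield traffic
    (200_000, 5e-9, 1e-3, 0.01),  # overflights below an airway
]
F = crash_frequency(sources)
assert F < 1e-6  # compare against a screening threshold, e.g. 1e-6 per year
```

Differences between the UK, IAEA, DOE, and NRC methods largely come down to how each term (especially P and f) is derived, which is why the comparison can be made contributor by contributor.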

More Details

Use of metal organic fluors for spectral discrimination of neutrons and gammas

Allendorf, Mark D.; Feng, Patrick L.

A new method for spectral shape discrimination (SSD) of fast neutrons and gamma rays has been investigated. Gammas interfere with neutron detection, making efficient discrimination necessary for practical applications. Pulse shape discrimination (PSD) in liquid organic scintillators is currently the most effective means of gamma rejection. The hazardous liquids, restrictions on volume, and the need for fast timing are drawbacks to traditional PSD scintillators. In this project we investigated harvesting excited triplet states to increase scintillation yield and provide distinct spectral signatures for gammas and neutrons. Our novel approach relies on metal-organic phosphors to convert a portion of the energy normally lost to the scintillation process into useful luminescence with sub-microsecond lifetimes. The approach enables independent control over delayed luminescence wavelength, intensity, and timing for the first time. We demonstrated that organic scintillators, including plastics, nanoporous framework materials, and oil-based liquids can be engineered for both PSD and SSD.

More Details

Thermodynamic and kinetic characterization of H-D exchange in Pd and Pd alloys

Luo, Weifang L.

A Sieverts apparatus coupled with a residual gas analyzer (RGA) provides an effective means of detecting composition variations during isotopic exchange. This experimental setup provides a powerful tool for the thermodynamic and kinetic characterization of H-D isotope exchange on metals and alloys. H-D exchange behavior during absorption and desorption in the plateau region in Pd has been investigated and is reported here. It was found that in the plateau region of the H-D-Pd system, the equilibrium pressures lie between those of H2-Pd and D2-Pd for both absorption and desorption, and the equilibrium pressures are higher when the fraction of D in the Pd is higher. Adding a dose of H2 (or D2) gas to a Pd-D (or Pd-H) system results in the release of D2 and HD (or H2 and HD) gas in the {beta}-phase of Pd-D (or {beta}-phase of Pd-H), but this does not happen in the plateau region. The equilibrium constants were determined during exchange and found to agree well with the calculated values reported in the literature. The separation factor {alpha} values during exchange were measured and compared with the literature values. The exchange rates were determined from the exchange profiles, and a first-order kinetic model for the exchange of H-D-Pd systems was employed for the analysis. The exchange activation energies for both directions, H2+PdD and D2+PdH, were determined.
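A first-order kinetic model of the kind mentioned above, together with the Arrhenius relation commonly used to extract an activation energy from rate constants measured at two temperatures, can be sketched as follows. The rate constants and temperatures are hypothetical illustrations, not the measured Pd values.

```python
import math

def exchange_fraction(t, x0, x_eq, k):
    """First-order relaxation of an isotopic fraction toward equilibrium:
    x(t) = x_eq + (x0 - x_eq) * exp(-k t)."""
    return x_eq + (x0 - x_eq) * math.exp(-k * t)

def activation_energy(k1, T1, k2, T2, R=8.314):
    """Arrhenius two-point estimate: from k = A exp(-Ea / (R T)),
    Ea = R * ln(k1 / k2) / (1/T2 - 1/T1), in J/mol."""
    return R * math.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)

k1, k2 = 1.0e-2, 4.0e-2   # hypothetical exchange rate constants, 1/s
T1, T2 = 300.0, 350.0     # K

Ea = activation_energy(k1, T1, k2, T2)  # about 24 kJ/mol for these numbers
assert Ea > 0.0
assert exchange_fraction(0.0, 1.0, 0.5, k1) == 1.0  # starts at x0
```

Fitting the measured exchange profiles to x(t) gives k at each temperature; repeating at several temperatures gives the activation energies for both exchange directions.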

More Details

Computational and experimental platform for understanding and optimizing water flux and salt rejection in nanoporous membranes

Rogers, David M.; Leung, Kevin L.; Brinker, C.J.; Singh, Seema S.; Merson, John A.

Affordable clean water is both a global and a national security issue as lack of it can cause death, disease, and international tension. Furthermore, efficient water filtration reduces the demand for energy, another national issue. The best current solution to clean water lies in reverse osmosis (RO) membranes that remove salts from water with applied pressure, but widely used polymeric membrane technology is energy intensive and produces water depleted in useful electrolytes. Furthermore, incremental improvements, based on engineering solutions rather than new materials, have yielded only modest gains in performance over the last 25 years. We have pursued a creative and innovative new approach to membrane design and development for cheap desalination membranes by approaching the problem at the molecular level of pore design. Our inspiration comes from natural biological channels, which permit faster water transport than current reverse osmosis membranes and selectively pass healthy ions. Aiming for an order-of-magnitude improvement over mature polymer technology carries significant inherent risks. The success of our fundamental research effort lies in our exploiting, extending, and integrating recent advances by our team in theory, modeling, nano-fabrication and platform development. A combined theoretical and experimental platform has been developed to understand the interplay between water flux and ion rejection in precisely-defined nano-channels. Our innovative functionalization of solid state nanoporous membranes with organic protein-mimetic polymers achieves 3-fold improvement in water flux over commercial RO membranes and has yielded a pending patent and industrial interest. Our success has generated useful contributions to energy storage, nanoscience, and membrane technology research and development important for national health and prosperity.

More Details

Variance estimation for radiation analysis and multi-sensor fusion

Mitchell, Dean J.

Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
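One standard way to fuse estimates of a common parameter from different sensor types is inverse-variance weighting, shown below purely as an illustration of why multi-sensor data tighten the combined uncertainty; it is not necessarily the estimator used in the report, and the numbers are hypothetical.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent estimates.
    estimates: list of (value, variance) pairs.
    Returns (fused value, fused variance); the fused variance is
    smaller than any individual variance."""
    weights = [1.0 / v for (_, v) in estimates]
    x = sum(w * val for w, (val, _) in zip(weights, estimates)) / sum(weights)
    v = 1.0 / sum(weights)
    return x, v

# Hypothetical: a gamma-spectrum-derived and a neutron-count-derived
# estimate of the same source parameter, each with its own total
# (statistical + computational) variance.
x, v = fuse([(10.0, 4.0), (12.0, 1.0)])
assert abs(x - 11.6) < 1e-9   # pulled toward the lower-variance sensor
assert abs(v - 0.8) < 1e-9    # tighter than either input variance
```

Note that the fused variance is meaningful only if each input variance honestly includes computational as well as statistical components, which is the point the abstract emphasizes.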

More Details

Algorithm and exploratory study of the Hall MHD Rayleigh-Taylor instability

Gardiner, Thomas A.

This report is concerned with the influence of the Hall term on the nonlinear evolution of the Rayleigh-Taylor (RT) instability. This begins with a review of the magnetohydrodynamic (MHD) equations including the Hall term and the wave modes which are present in the system on time scales short enough that the plasma can be approximated as being stationary. In this limit one obtains what are known as the electron MHD (EMHD) equations which support two characteristic wave modes known as the whistler and Hall drift modes. Each of these modes is considered in some detail in order to draw attention to their key features. This analysis also serves to provide a background for testing the numerical algorithms used in this work. The numerical methods are briefly described and the EMHD solver is then tested for the evolution of whistler and Hall drift modes. These methods are then applied to study the nonlinear evolution of the MHD RT instability with and without the Hall term for two different configurations. The influence of the Hall term on the mixing and bubble growth rate are analyzed.

More Details

Solid oxide electrochemical reactor science

Stechel-Speicher, Ellen B.

Solid-oxide electrochemical cells are an exciting new technology. Development of solid-oxide cells (SOCs) has advanced considerably in recent years and continues to progress rapidly. This work studies several aspects of SOCs and contributes useful information to their continued development. This LDRD involved a collaboration between Sandia and the Colorado School of Mines (CSM) in solid-oxide electrochemical reactors, targeted at solid-oxide electrolyzer cells (SOECs), which are the reverse of solid-oxide fuel cells (SOFCs). SOECs complement Sandia's efforts in thermochemical production of alternative fuels. An SOEC technology would co-electrolyze carbon dioxide (CO{sub 2}) with steam at temperatures around 800 C to form synthesis gas (H{sub 2} and CO), which provides the building blocks for petrochemical substitutes that can be used to power vehicles or in distributed energy platforms. The effort described here concentrates on research concerning catalytic chemistry, charge-transfer chemistry, and optimal cell architecture. The technical scope included computational modeling, materials development, and experimental evaluation. The project engaged the Colorado Fuel Cell Center at CSM through the support of a graduate student (Connor Moyer) at CSM and his advisors (Profs. Robert Kee and Neal Sullivan) in collaboration with Sandia.

More Details

Simulations of neutron multiplicity measurements with MCNP-PoliMi

Miller, Eric C.; Mattingly, John K.

The heightened focus on nuclear safeguards and accountability has increased the need to develop and verify simulation tools for modeling these applications. The ability to accurately simulate safeguards techniques, such as neutron multiplicity counting, aids in the design and development of future systems. This work focuses on validating the ability of the Monte Carlo code MCNPX-PoliMi to reproduce measured neutron multiplicity results for a highly multiplicative sample. The benchmark experiment for this validation consists of a 4.5-kg sphere of plutonium metal that was moderated by various thicknesses of polyethylene. The detector system was the nPod, which contains a bank of 15 3He detectors. Simulations of the experiments were compared to the actual measurements and several sources of potential bias in the simulation were evaluated. The analysis included the effects of detector dead time, source-detector distance, density, and adjustments made to the value of {nu}-bar in the data libraries. Based on this analysis it was observed that a 1.14% decrease in the evaluated value of {nu}-bar for 239Pu in the ENDF-VII library substantially improved the accuracy of the simulation.

More Details

Enabling R&D for accurate simulation of non-ideal explosives

Thompson, Aidan P.; Aidun, John B.; Schmitt, Robert G.

We implemented two numerical simulation capabilities essential to reliably predicting the effect of non-ideal explosives (NXs). To begin to be able to treat the multiple, competing, multi-step reaction paths and slower kinetics of NXs, Sandia's CTH shock physics code was extended to include the TIGER thermochemical equilibrium solver as an in-line routine. To facilitate efficient exploration of reaction pathways that need to be identified for the CTH simulations, we implemented in Sandia's LAMMPS molecular dynamics code the MSST method, which is a reactive molecular dynamics technique for simulating steady shock wave response. Our preliminary demonstrations of these two capabilities serve several purposes: (i) they demonstrate proof-of-principle for our approach; (ii) they provide illustration of the applicability of the new functionality; and (iii) they begin to characterize the use of the new functionality and identify where improvements will be needed for the ultimate capability to meet national security needs. Next steps are discussed.

More Details

Design considerations for concentrating solar power tower systems employing molten salt

Moore, Robert C.; Vernon, Milton E.; Ho, Clifford K.; Siegel, Nathan P.; Kolb, Gregory J.

The Solar Two Project was a United States Department of Energy sponsored project, operated from 1996 to 1999, to demonstrate the coupling of a solar power tower with a molten nitrate salt as a heat-transfer medium and thermal storage. Overall, the Solar Two Project was very successful; however, many operational challenges were encountered. In this work, the major problems encountered in operation of the Solar Two facility were evaluated and alternative technologies identified for use in a future solar power tower operating with a steam Rankine power cycle. Many of the major problems encountered can be addressed with new technologies that were not available a decade ago. These new technologies include better thermal insulation, analytical equipment, pumps and valves specifically designed for molten nitrate salts, gaskets resistant to thermal cycling, and advanced equipment designs.

More Details

Initiation of the TLR4 signal transduction network : deeper understanding for better therapeutics

Kent, Michael S.; Branda, Steven B.; Hayden, Carl C.; Sasaki, Darryl Y.; Sale, Kenneth L.

The innate immune system represents our first line of defense against microbial pathogens, and in many cases is activated by recognition of pathogen cellular components (dsRNA, flagella, LPS, etc.) by cell surface membrane proteins known as toll-like receptors (TLRs). As the initial trigger for innate immune response activation, TLRs also represent a means by which we can effectively control or modulate inflammatory responses. This proposal focused on TLR4, which is the cell-surface receptor primarily responsible for initiating the innate immune response to lipopolysaccharide (LPS), a major component of the outer membrane envelope of gram-negative bacteria. The goal was to better understand TLR4 activation and associated membrane proximal events, in order to enhance the design of small molecule therapeutics to modulate immune activation. Our approach was to reconstitute the receptor in biomimetic systems in vitro to allow study of the structure and dynamics with biophysical methods. Structural studies were initiated in the first year but were halted after the crystal structure of the dimerized receptor was published early in the second year of the program. Methods were developed to determine the association constant for oligomerization of the soluble receptor. LPS-induced oligomerization was observed to be a strong function of buffer conditions. In 20 mM Tris pH 8.0 with 200 mM NaCl, the onset of receptor oligomerization occurred at 0.2 uM TLR4/MD2 with E. coli LPS Ra mutant in excess. However, in the presence of 0.5 uM CD14 and 0.5 uM LBP, the onset of receptor oligomerization was observed at less than 10 nM TLR4/MD2. Several methods were pursued to study LPS-induced oligomerization of the membrane-bound receptor, including CryoEM, FRET, colocalization and codiffusion followed by TIRF, and fluorescence correlation spectroscopy. However, these approaches met with only limited success.

More Details

Modeling attacker-defender interactions in information networks

Collins, Michael J.

The simplest conceptual model of cybersecurity implicitly views attackers and defenders as acting in isolation from one another: an attacker seeks to penetrate or disrupt a system that has been protected to a given level, while a defender attempts to thwart particular attacks. Such a model also views all non-malicious parties as having the same goal of preventing all attacks. But in fact, attackers and defenders are interacting parts of the same system, and different defenders have their own individual interests: defenders may be willing to accept some risk of successful attack if the cost of defense is too high. We have used game theory to develop models of how non-cooperative but non-malicious players in a network interact when there is a substantial cost associated with effective defensive measures. Although game theory has been applied in this area before, we have introduced some novel aspects of player behavior in our work, including: (1) A model of how players attempt to avoid the costs of defense and force others to assume these costs; (2) A model of how players interact when the cost of defending one node can be shared by other nodes; and (3) A model of the incentives for a defender to choose less expensive, but less effective, defensive actions.
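The cost-shifting behavior in model (1) can be illustrated with a toy two-defender game: protecting a shared node costs c to whoever defends, and if nobody defends, each player suffers an expected attack loss L. This payoff structure and equilibrium check are a hypothetical illustration in the spirit of the abstract, not the paper's actual model.

```python
from itertools import product

c, L = 3.0, 5.0  # hypothetical cost of defense and expected attack loss

def payoff(me, other):
    """Payoff to a player choosing to defend (me=True) or not,
    given the other player's choice."""
    if me:
        return -c                    # I bear the full cost of defense
    return 0.0 if other else -L      # free-ride if the other defends

def is_nash(a, b):
    # Pure-strategy Nash: neither player gains by unilaterally switching.
    return (payoff(a, b) >= payoff(not a, b)) and (payoff(b, a) >= payoff(not b, a))

equilibria = [(a, b) for a, b in product([True, False], repeat=2) if is_nash(a, b)]

# With c < L, the only pure equilibria are the asymmetric outcomes in which
# exactly one player assumes the entire cost of defense.
assert equilibria == [(True, False), (False, True)]
```

The asymmetric equilibria capture the free-riding incentive: each defender prefers the outcome in which the other pays, even though mutual non-defense is worse for both.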

More Details

Improved high temperature solar absorbers for use in Concentrating Solar Power central receiver applications

Staiger, Chad S.; Lambert, Timothy N.; Hall, Aaron C.; Bencomo, Marlene B.; Stechel-Speicher, Ellen B.

Concentrating solar power (CSP) systems use solar absorbers to convert the heat from sunlight to electric power. Increased operating temperatures are necessary to lower the cost of solar-generated electricity by improving efficiencies and reducing thermal energy storage costs. Durable new materials are needed to cope with operating temperatures >600 C. The current coating technology (Pyromark High Temperature paint) has a solar absorptance in excess of 0.95 but a thermal emittance greater than 0.8, which results in large thermal losses at high temperatures. In addition, because solar receivers operate in air, these coatings have long term stability issues that add to the operating costs of CSP facilities. Ideal absorbers must have high solar absorptance (>0.95) and low thermal emittance (<0.05) in the IR region, be stable in air, and be low-cost and readily manufacturable. We propose to utilize solution-based synthesis techniques to prepare intrinsic absorbers for use in central receiver applications.
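The case for low thermal emittance can be made with a simplified surface energy balance, radiative losses only: efficiency is the absorbed flux minus the re-radiated loss, divided by the incident concentrated flux. The concentration level and temperature below are hypothetical illustrations, not design values from the project.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def receiver_efficiency(alpha, eps, T, flux):
    """Simplified absorber efficiency (radiative losses only):
    eta = (alpha * flux - eps * sigma * T**4) / flux,
    with T in K and concentrated flux in W/m^2."""
    return (alpha * flux - eps * SIGMA * T**4) / flux

flux = 6.0e5            # hypothetical ~600-sun concentrated flux, W/m^2
T = 600.0 + 273.15      # 600 C operating temperature, in K

eta_pyromark = receiver_efficiency(0.95, 0.80, T, flux)   # Pyromark-like surface
eta_selective = receiver_efficiency(0.95, 0.05, T, flux)  # ideal selective absorber

assert eta_selective > eta_pyromark > 0.0
```

Even at high concentration, dropping the emittance from 0.8 to 0.05 recovers several percentage points of efficiency at 600 C, and the gap widens rapidly at higher temperatures because the loss scales as T to the fourth power.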

More Details

Nanopatterned ferroelectrics for ultrahigh density rad-hard nonvolatile memories

Brennecka, Geoffrey L.; Stevens, Jeffrey S.; Gin, Aaron G.; Scrymgeour, David S.

Radiation hard nonvolatile random access memory (NVRAM) is a crucial component for DOE and DOD surveillance and defense applications. NVRAMs based upon ferroelectric materials (also known as FERAMs) are proven to work in radiation-rich environments and inherently require less power than many other NVRAM technologies. However, fabrication and integration challenges have led to state-of-the-art FERAMs still being fabricated using a 130nm process while competing phase-change memory (PRAM) has been demonstrated with a 20nm process. Use of block copolymer lithography is a promising approach to patterning at the sub-32nm scale, but is currently limited to self-assembly directly on Si or SiO{sub 2} layers. Successful integration of ferroelectrics with discrete and addressable features of {approx}15-20nm would represent a 100-fold improvement in areal memory density and would enable more highly integrated electronic devices required for systems advances. Towards this end, we have developed a technique that allows us to carry out block copolymer self-assembly directly on a huge variety of different materials and have investigated the fabrication, integration, and characterization of electroceramic materials - primarily focused on solution-derived ferroelectrics - with discrete features of {approx}20nm and below. Significant challenges remain before such techniques will be capable of fabricating fully integrated NVRAM devices, but the tools developed for this effort are already finding broader use. This report introduces the nanopatterned NVRAM device concept as a mechanism for motivating the subsequent studies, but the bulk of the document will focus on the platform and technology development.

More Details

Thermokinetic/mass-transfer analysis of carbon capture for reuse/sequestration

Brady, Patrick V.; Luketa, Anay L.; Stechel-Speicher, Ellen B.

Effective capture of atmospheric carbon is a key bottleneck preventing non bio-based, carbon-neutral production of synthetic liquid hydrocarbon fuels using CO{sub 2} as the carbon feedstock. Here we outline the boundary conditions of atmospheric carbon capture for recycle into liquid hydrocarbon fuel production and re-use options, and we identify the technical advances that must be made for such a process to become technically and commercially viable at scale. While conversion of atmospheric CO{sub 2} into a pure feedstock for hydrocarbon fuels synthesis is presently feasible at the bench-scale - albeit at high cost energetically and economically - the methods and materials needed to concentrate large amounts of CO{sub 2} at low cost and high efficiency remain technically immature. Industrial-scale capture must entail: (1) Processing of large volumes of air through an effective CO{sub 2} capture media and (2) Efficient separation of CO{sub 2} from the processed air flow into a pure stream of CO{sub 2}.

More Details

Development of efficient, integrated cellulosic biorefineries : LDRD final report

Shaddix, Christopher R.; Hecht, Ethan S.; Teh, Kwee-Yan T.; Buffleben, George M.; Dibble, Dean C.

Cellulosic ethanol, generated from lignocellulosic biomass sources such as grasses and trees, is a promising alternative to conventional starch- and sugar-based ethanol production in terms of potential production quantities, CO{sub 2} impact, and economic competitiveness. In addition, cellulosic ethanol can be generated (at least in principle) without competing with food production. However, approximately 1/3 of the lignocellulosic biomass material (including all of the lignin) cannot be converted to ethanol through biochemical means and must be extracted at some point in the biochemical process. In this project we gathered basic information on the prospects for utilizing this lignin residue material in thermochemical conversion processes to improve the overall energy efficiency or liquid fuel production capacity of cellulosic biorefineries. Two existing pretreatment approaches, soaking in aqueous ammonia (SAA) and the Arkenol (strong sulfuric acid) process, were implemented at Sandia and used to generate suitable quantities of residue material from corn stover and eucalyptus feedstocks for subsequent thermochemical research. A third, novel technique, using ionic liquids (IL), was investigated by Sandia researchers at the Joint BioEnergy Institute (JBEI), but was not successful in isolating sufficient lignin residue. Additional residue material for thermochemical research was supplied from the dilute-acid simultaneous saccharification/fermentation (SSF) pilot-scale process at the National Renewable Energy Laboratory (NREL). The high-temperature volatiles yields of the different residues were measured, as were the char combustion reactivities. The residue chars showed slightly lower reactivity than raw biomass char, except for the SSF residue, which had substantially lower reactivity.
Exergy analysis was applied to the NREL standard process design model for thermochemical ethanol production and from a prototypical dedicated biochemical process, with process data supplied by a recent report from the National Research Council (NRC). The thermochemical system analysis revealed that most of the system inefficiency is associated with the gasification process and subsequent tar reforming step. For the biochemical process, the steam generation from residue combustion, providing the requisite heating for the conventional pretreatment and alcohol distillation processes, was shown to dominate the exergy loss. An overall energy balance with different potential distillation energy requirements shows that as much as 30% of the biomass energy content may be available in the future as a feedstock for thermochemical production of liquid fuels.

More Details

Challenge problem and milestones for the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC)

Arguello, Jose G.; McNeish, Jerry M.; Schultz, Peter A.; Wang, Yifeng

This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

More Details

Influence of point defects on grain boundary motion

Foiles, Stephen M.

This work addresses the influence of point defects, in particular vacancies, on the motion of grain boundaries. If there is a non-equilibrium concentration of point defects in the vicinity of an interface, such as due to displacement cascades in a radiation environment, motion of the interface to sweep up the defects will lower the energy and provide a driving force for interface motion. Molecular dynamics simulations are employed to examine the process for the case of excess vacancy concentrations in the vicinity of two grain boundaries. It is observed that the efficacy of the presence of the point defects in inducing boundary motion depends on the balance of the mobility of the defects with the mobility of the interfaces. In addition, the extent to which grain boundaries are ideal sinks for vacancies is evaluated by considering the energy of boundaries before and after vacancy absorption.

More Details

Scaling of X pinches from 1 MA to 6 MA

Sinars, Daniel S.; McBride, Ryan D.; Wenger, D.F.; Cuneo, M.E.; Yu, Edmund Y.; Harding, Eric H.; Hansen, Stephanie B.; Ampleford, David A.; Jennings, Christopher A.

This final report for Project 117863 summarizes progress made toward understanding how X-pinch load designs scale to high currents. The X-pinch load geometry was conceived in 1982 as a method to study the formation and properties of bright x-ray spots in z-pinch plasmas. X-pinch plasmas driven by 0.2 MA currents were found to have source sizes of 1 micron, temperatures >1 keV, lifetimes of 10-100 ps, and densities >0.1 times solid density. These conditions are believed to result from the direct magnetic compression of matter. Physical models that capture the behavior of 0.2 MA X pinches predict more extreme parameters at currents >1 MA. This project developed load designs for up to 6 MA on the SATURN facility and attempted to measure the resulting plasma parameters. Source sizes of 5-8 microns were observed in some cases along with evidence for high temperatures (several keV) and short time durations (<500 ps).

More Details

Scientific data analysis on data-parallel platforms

Roe, Diana C.; Choe, Yung R.; Ulmer, Craig D.

As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.

More Details

Using reconfigurable functional units in conventional microprocessors

Rodrigues, Arun

Scientific applications use highly specialized data structures that require complex, latency-sensitive graphs of integer instructions for memory address calculations. Working with the University of Wisconsin, we have demonstrated significant differences between Sandia's applications and the industry-standard SPEC-FP (Standard Performance Evaluation Corporation floating-point) suite. Specifically, integer dataflow performance is critical to overall system performance. To improve this performance, we have developed a configurable functional unit design that is capable of accelerating integer dataflow.

More Details

Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion

Frank, Jonathan H.; Lawson, Matthew L.; Sargsyan, Khachik S.; Debusschere, Bert D.; Najm, H.N.

Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.

More Details

Accelerated Cartesian expansion (ACE) based framework for the rapid evaluation of diffusion, lossy wave, and Klein-Gordon potentials

Journal of Computational Physics

Baczewski, Andrew D.; Vikram, Melapudi; Shanker, Balasubramaniam; Kempel, Leo

Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green’s functions is characterized by an infinite tail. This implies that the cost complexity of the spatio-temporal convolutions, associated with evaluating the potentials, scales as O(Ns^2 Nt^2), where Ns and Nt are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics and uses the fast Fourier transform (FFT) for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(Ns Nt log^2 Nt). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
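The FFT-based acceleration described in the abstract rests on a standard identity: a causal temporal convolution with a long-tailed kernel can be evaluated with zero-padded FFTs instead of a direct double loop. The sketch below (plain NumPy, with the ACE harmonics reduced to a single scalar history for illustration; it is not the paper's implementation) shows the O(Nt^2) vs. O(Nt log Nt) contrast for one such convolution:

```python
import numpy as np

def direct_convolution(h, f):
    """Direct causal convolution g[i] = sum_{k<=i} h[i-k] f[k]; O(Nt^2)."""
    n = len(f)
    return np.array([sum(h[i - k] * f[k] for k in range(i + 1))
                     for i in range(n)])

def fft_convolution(h, f):
    """Same result via zero-padded FFTs; O(Nt log Nt)."""
    n = len(f)
    m = 2 * n  # pad so the circular convolution equals the linear one
    return np.fft.irfft(np.fft.rfft(h, m) * np.fft.rfft(f, m), m)[:n]
```

This is only the temporal ingredient; the full method couples such convolutions to the spatial ACE expansion to reach the quoted O(Ns Nt log^2 Nt) cost.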

More Details

Reducing variance in batch partitioning measurements

Mariner, Paul M.

The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
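The "half the sorbate on the sorbent" design rule can be reproduced with a small Monte Carlo sketch. The numbers below (Kd = 100 mL/g, a fixed absolute analytical error on both concentration measurements) are hypothetical illustrations, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

def kd_rel_std(kd_true, ratio, c0=100.0, sigma=1.0, n=50_000):
    """Relative standard deviation of back-calculated Kd for a given
    sorbent:solution ratio (g/mL), assuming a fixed absolute analytical
    error sigma on both the initial and final aqueous concentrations."""
    cw = c0 / (1.0 + kd_true * ratio)              # equilibrium aqueous conc.
    c0_meas = c0 + sigma * rng.standard_normal(n)  # noisy initial conc.
    cw_meas = cw + sigma * rng.standard_normal(n)  # noisy final conc.
    kd_est = (c0_meas - cw_meas) / (cw_meas * ratio)  # Kd = (C0-Cw)/(Cw*ratio)
    return kd_est.std() / kd_est.mean()

kd = 100.0                            # mL/g, hypothetical sorbate
for ratio in (1e-4, 1e-2, 1e-1):      # roughly 1%, 50%, and 91% sorbed
    print(f"ratio={ratio:g} g/mL -> relative std of Kd ~ {kd_rel_std(kd, ratio):.3f}")
```

The middle ratio, chosen so that half the sorbate partitions to the sorbent, gives the smallest relative spread; both extremes inflate the variance dramatically.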

More Details

A resilience assessment framework for infrastructure and economic systems: Quantitative and qualitative resilience analysis of petrochemical supply chains to a hurricane

AIChE Annual Meeting, Conference Proceedings

Vugrin, Eric D.; Warren, Drake E.; Ehlen, Mark E.

In recent years, the nation has recognized that critical infrastructure protection should consider not only the prevention of disruptive events, but also the processes that infrastructure systems undergo to maintain functionality following disruptions. This more comprehensive approach has been termed critical infrastructure resilience. Given the occurrence of a particular disruptive event, the resilience of a system to that event is the system's ability to efficiently reduce both the magnitude and duration of the deviation from targeted system performance levels. Under the direction of the U.S. Department of Homeland Security's Science and Technology Directorate, Sandia National Laboratories has developed a comprehensive resilience assessment framework for evaluating the resilience of infrastructure and economic systems. The framework includes a quantitative methodology that measures resilience costs that result from a disruption to infrastructure function. The framework also includes a qualitative analysis methodology that assesses system characteristics affecting resilience to provide insight and direction for potential improvements. This paper describes the resilience assessment framework and demonstrates its utility through application to two hypothetical scenarios involving the disruption of a petrochemical supply chain by hurricanes.

More Details

Impacts to the ethylene supply chain from a hurricane disruption

AIChE Annual Meeting, Conference Proceedings

Downes, Paula S.; Welk, Margaret; Sun, Amy C.; Heinen, Russell

Analysis of chemical supply chains is an inherently complex task, given the dependence of these supply chains on multiple infrastructure systems (e.g. transportation and energy). This effort requires data and information at various levels of resolution, ranging from network-level distribution systems to individual chemical reactions. The U.S. Department of Homeland Security (DHS) has tasked the National Infrastructure Simulation and Analysis Center (NISAC) with developing a chemical infrastructure analytical capability to assess interdependencies and complexities of the nation's critical infrastructure, including the chemical sector. To address this need, the Sandia National Laboratories (Sandia) component of NISAC has integrated its existing simulation and infrastructure analysis capabilities with various chemical industry datasets to create a capability to analyze and estimate the supply chain and economic impacts resulting from large-scale disruptions to the chemical sector. This development effort is ongoing and is currently being funded by the DHS's Science and Technology Directorate. This paper describes the methodology being used to create the capability and the types of data necessary to exercise the capability, and it presents an example analysis focusing on the ethylene portion of the chemical supply chain.

More Details

Modeling the national chlorinated hydrocarbon supply chain and effects of disruption

AIChE Annual Meeting, Conference Proceedings

Welk, Margaret E.; Sun, Amy C.; Downes, Paula S.

Chlorinated hydrocarbons represent the precursors for products ranging from polyvinyl chloride (PVC) and refrigerants to pharmaceuticals. Natural or manmade disruptions that affect the availability of these products nationally have the potential to affect a wide range of markets, from healthcare to construction. Analysis of chemical supply chains is an inherently complex task, given the dependence of these supply chains on multiple infrastructure systems (e.g. transportation and energy). This effort requires data and information at various levels of resolution, ranging from network-level distribution systems to individual chemical reactions. The U.S. Department of Homeland Security (DHS) has tasked the National Infrastructure Simulation and Analysis Center (NISAC) with developing a chemical infrastructure analytical capability to assess interdependencies and complexities of the nation's critical infrastructure, including the chemical sector. To address this need, the Sandia National Laboratories (Sandia) component of NISAC has integrated its existing simulation and infrastructure analysis capabilities with various chemical industry datasets to create a capability to analyze and estimate the supply chain economic impacts resulting from large-scale disruptions to the chemical sector. This development effort is ongoing and is currently being funded by the DHS's Science and Technology Directorate. This paper describes the methodology being used to create the capability and the types of data necessary to exercise the capability, and it presents an example analysis focusing on the chlorinated hydrocarbon portion of the chemical supply chain.

More Details

Process characterization vehicles for 3D integration

Proceedings - Electronic Components and Technology Conference

Campbell, David V.

Assemblies produced by 3D integration, whether fabricated at the die or wafer level, involve a large number of post-fab processing steps. Performing the prove-in of these operations on high-value product has many limitations. This work uses simple surrogate process characterization vehicles, which work around limitations of cost, timeliness of pieceparts, ability to consider multiple processing options, and insufficient volumes for adequately exercising flows, to collect specific process data for characterization. The test structures easily adapt to specific products in terms of die dimensions, aspect ratios, and pitch and number of interconnects. This results in good fidelity in exercising product-specific processing. The Cyclops vehicle discussed here implements a mirrored layout suitable for stacking to itself via wafer-to-wafer, die-to-wafer, or die-to-die assembly. A standardized 2x10 pad test interface allows characterization of any of the integration methods with a single simple setup. This design offers the utility of comparing the various methods using the same basis.

More Details

Yield modeling of 3D integrated wafer scale assemblies

Proceedings - Electronic Components and Technology Conference

Campbell, David V.

3D integration approaches exist for wafer-to-wafer, die-to-wafer, and die-to-die assembly, each with distinct merits. Creation of "seamless" wafer-scale focal plane arrays on the order of 6-8" in diameter drives very demanding yield requirements and understanding. This work established a Monte Carlo model of our exploratory architecture in order to assess the trade-offs among the various assembly methods. The model results suggested an optimum die size, number of die stacks per assembly, and number of layers per stack, and quantified the value of sorting for optimizing the assembly process.
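A toy version of such a Monte Carlo yield model fits in a few lines. All parameters below (per-die yield, per-bond yield, stack height) are illustrative placeholders, not the paper's architecture; the point is the contrast between blind wafer-to-wafer pairing and assembly flows that can pre-sort known-good die:

```python
import numpy as np

rng = np.random.default_rng(2)

def stack_yield(die_yield, bond_yield, layers, sorted_die, n=200_000):
    """Monte Carlo yield of n simulated 'layers'-high die stacks. Each
    die is good with probability die_yield; each of the layers-1 bonding
    operations succeeds with probability bond_yield. Die-to-wafer and
    die-to-die flows can pre-sort known-good die (sorted_die=True);
    wafer-to-wafer bonding cannot."""
    if sorted_die:
        die_ok = np.ones(n, dtype=bool)  # only tested-good die are stacked
    else:
        die_ok = (rng.random((n, layers)) < die_yield).all(axis=1)
    bond_ok = (rng.random((n, layers - 1)) < bond_yield).all(axis=1)
    return (die_ok & bond_ok).mean()

for sorted_die in (False, True):
    y = stack_yield(die_yield=0.9, bond_yield=0.98, layers=4,
                    sorted_die=sorted_die)
    print(f"known-good-die sorting={sorted_die}: stack yield ~ {y:.3f}")
```

Even this crude model shows why sorting is valuable: without it, stack yield decays as die_yield**layers, which quickly dominates at wafer-scale assembly counts.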

More Details

Contribution of optical phonons to thermal boundary conductance

Applied Physics Letters

Beechem, Thomas; Duda, John C.; Hopkins, Patrick E.; Norris, Pamela M.

Thermal boundary conductance (TBC) is a performance determinant for many microsystems due to the numerous interfaces contained within their structure. To assess this transport, theoretical approaches often account for only the acoustic phonons as optical modes are assumed to contribute negligibly due to their low group velocities. To examine this approach, the diffuse mismatch model is reformulated to account for more realistic dispersions containing optical modes. Using this reformulation, it is found that optical phonons contribute to TBC by as much as 80% for a variety of material combinations in the limit of both inelastic and elastic scattering. © 2010 American Institute of Physics.

More Details

Evolution of Sandia's Risk Assessment Methodology for Water and Wastewater Utilities (RAM-W™)

World Environmental and Water Resources Congress 2010: Challenges of Change - Proceedings of the World Environmental and Water Resources Congress 2010

Jaeger, Calvin D.; Hightower, Marion M.; Torres, Teresa M.

The initial version of RAM-W was issued in November 2001. The Public Health Security and Bioterrorism Preparedness and Response Act was issued in 2002, and in October 2002 version 2 of RAM-W was distributed to the water sector. In August 2007, RAM-W was revised to be compliant with specific RAMCAP® (Risk Analysis and Management for Critical Asset Protection) requirements. In addition, this version of RAM-W incorporated a number of other changes and improvements to the RAM process. All of these RAM-W versions were manual, paper-based methods that allowed an analyst to estimate security risk for a specific utility. In September 2008, an automated RAM prototype tool was developed that provided the basic RAM framework for critical infrastructures. In 2009, water sector stakeholders identified a need to automate RAM-W, and this development effort was started in January 2009. This presentation will discuss the evolution of the RAM-W approach, its capabilities, and the new automated RAM-W tool (ARAM-W), which will be available in mid-2010. © 2010 ASCE.

More Details

Representation of analysis results involving aleatory and epistemic uncertainty

International Journal of General Systems

Sallaberry, Cedric J.

Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behaviour of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary CDFs (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (e.g. interval analysis, possibility theory, evidence theory or probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterisations of epistemic uncertainty.

More Details

Incorporating uncertainty into probabilistic performance models of concentrating solar power plants

Journal of Solar Energy Engineering, Transactions of the ASME

Ho, Clifford K.; Kolb, Gregory J.

A method for applying probabilistic models to concentrating solar-thermal power plants is described in this paper. The benefits of using probabilistic models include quantification of uncertainties inherent in the system and characterization of their impact on system performance and economics. Sensitivity studies using stepwise regression analysis can identify and rank the most important parameters and processes as a means to prioritize future research and activities. The probabilistic method begins with the identification of uncertain variables and the assignment of appropriate distributions for those variables. Those parameters are then sampled using a stratified method (Latin hypercube sampling) to ensure complete and representative sampling from each distribution. Models of performance, reliability, and cost are then simulated multiple times using the sampled set of parameters. The results yield a cumulative distribution function that can be used to quantify the probability of exceeding (or being less than) a particular value. Two examples, a simple cost model and a more detailed performance model of a hypothetical 100-MWe power tower, are provided to illustrate the methods. Copyright © 2010 by ASME.
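The sampling-and-simulation loop described in the abstract can be sketched compactly. The hand-rolled Latin hypercube sampler below places exactly one sample in each of n equal-probability strata per dimension; the cost model and its parameter ranges are hypothetical stand-ins, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n, d):
    """n samples in d dimensions: one point per equal-probability
    stratum along each axis, with strata shuffled independently per
    dimension (stratified, space-filling in each 1-D margin)."""
    cols = [rng.permutation(n) for _ in range(d)]
    return (np.column_stack(cols) + rng.random((n, d))) / n

# Hypothetical cost model (not the paper's 100-MWe tower model):
# levelized cost = (capital * fixed charge rate + O&M) / annual energy.
n = 1000
u = latin_hypercube(n, 3)
capital = 400e6 + 200e6 * u[:, 0]         # $400M-$600M capital, uniform
cap_factor = 0.20 + 0.15 * u[:, 1]        # 20-35% annual capacity factor
o_and_m = 8e6 + 4e6 * u[:, 2]             # $8M-$12M/yr O&M
energy_kwh = 100e3 * cap_factor * 8760.0  # 100-MWe plant output, kWh/yr
lcoe = (0.10 * capital + o_and_m) / energy_kwh

lcoe.sort()  # sorted samples give the empirical CDF of the result
print(f"90% of sampled cases have LCOE <= {lcoe[int(0.9 * n)]:.3f} $/kWh")
```

Sorting the simulated outputs yields the cumulative distribution function directly, so exceedance probabilities can be read off at any threshold.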

More Details

Processing effects on microstructure in Er and ErD2 thin-films

Journal of Nuclear Materials

Parish, Chad M.; Snow, Clark S.; Kammler, Daniel K.; Brewer, Luke N.

Erbium metal thin-films have been deposited on molybdenum-on-silicon substrates and then converted to erbium dideuteride (ErD2). Here, we study the effects of deposition temperature (≈300 or 723 K) and deposition rate (1 or 20 nm/s) upon the initial Er metal microstructure and subsequent ErD2 microstructure. We find that low deposition temperature and low deposition rate lead to small Er metal grain sizes, and high deposition temperature and deposition rate led to larger Er metal grain sizes, consistent with published models of metal thin-film growth. ErD2 grain sizes are strongly influenced by the prior-metal grain size, with small metal grains leading to large ErD2 grains. A novel sample preparation technique for electron backscatter diffraction of air-sensitive ErD2 was developed, and allowed the quantitative measurement of ErD2 grain size and crystallographic texture. Finer-grained ErD2 showed a strong (1 1 1) fiber texture, whereas larger grained ErD2 had only weak texture. We hypothesize that this inverse correlation may arise from improved hydrogen diffusion kinetics in the more defective fine-grained metal structure or due to improved nucleation in the textured large-grain Er. © 2010 Elsevier B.V. All rights reserved.

More Details

Ethanol autoignition characteristics and HCCI performance for wide ranges of engine speed, load and boost

SAE International Journal of Engines

Sjoberg, Carl M.; Dec, John E.

The characteristics of ethanol autoignition and the associated HCCI performance are examined in this work. The experiments were conducted over wide ranges of engine speed, load and intake boost pressure (Pin) in a single-cylinder HCCI research engine (0.98 liters) with a CR = 14 piston. The data show that pure ethanol is a true single-stage ignition fuel. It does not exhibit low-temperature heat release (LTHR), not even for boosted operation. This makes ethanol uniquely different from conventional distillate fuels and offers several benefits: a) The intake temperature (Tin) does not have to be adjusted much with changes of engine speed, load and intake boost pressure. b) High Pin can be tolerated without running out of control authority because of an excessively low Tin requirement. However, by maintaining true single-stage ignition characteristics, ethanol also shows a relatively low temperature-rise rate just prior to its hot ignition point. Therefore, ethanol does not tolerate as much combustion-phasing retard as fuels that exhibit LTHR and/or pronounced intermediate-temperature heat release. Since combustion retard is important for avoiding excessive pressure-rise rates, the distinct single-stage ignition characteristic of ethanol can be considered a drawback when reaching for higher loads. Nonetheless, an IMEPg of 11.3 bar was demonstrated for Pin = 247 kPa. Finally, the latest ethanol chemical-kinetics mechanism from the National University of Ireland - Galway was evaluated against the experimental engine data using a multi-zone model. Overall, the mechanism performs very well over wide ranges of operating conditions. © 2010 SAE International.

More Details

A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data

De Sapio, Vincent D.

The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverage semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
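A minimal sketch of the graph-synthesis step, with hypothetical job records and an assumed weighting scheme (shared compute nodes, exponentially decaying temporal proximity, and a shared-user bonus; none of these are the report's actual ontology), might look like:

```python
import math
from itertools import combinations

# Hypothetical job records: id, owner, compute nodes used, start time (s).
jobs = [
    {"id": "j1", "user": "alice", "nodes": {"n01", "n02"}, "start": 0},
    {"id": "j2", "user": "bob",   "nodes": {"n02", "n03"}, "start": 600},
    {"id": "j3", "user": "alice", "nodes": {"n07"},        "start": 90_000},
]

def edge_weight(a, b, time_scale=3600.0):
    """Fold several relationship factors into a single edge weight:
    shared compute nodes, temporal proximity, and a same-owner bonus."""
    w = len(a["nodes"] & b["nodes"])                           # shared nodes
    w += math.exp(-abs(a["start"] - b["start"]) / time_scale)  # proximity
    w += 0.5 * (a["user"] == b["user"])                        # same owner
    return w

# Weighted adjacency map; drop edges below a relevance threshold.
graph = {(a["id"], b["id"]): w
         for a, b in combinations(jobs, 2)
         if (w := edge_weight(a, b)) > 0.25}
print(graph)
```

The resulting weighted graph is then a natural input for clustering and other formal analysis algorithms of the kind the report describes.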

More Details

Environmental geographic information system

Peek, Dennis W.; Helfrich, Donald A.; Gorman, Susan

This document describes how the Environmental Geographic Information System (EGIS) was used, along with externally received data, to create maps for the Site-Wide Environmental Impact Statement (SWEIS) Source Document project. Data quality among the various classes of geographic information system (GIS) data is addressed. A complete listing of map layers used is provided.

More Details

Environmental management system

Salinas, Stephanie A.

The purpose of the Sandia National Laboratories/New Mexico (SNL/NM) Environmental Management System (EMS) is identification of environmental consequences from SNL/NM activities, products, and/or services to develop objectives and measurable targets for mitigation of any potential impacts to the environment. This Source Document discusses the annual EMS process for analysis of environmental aspects and impacts and also provides the fiscal year (FY) 2010 analysis. Further information on the EMS structure, processes, and procedures is described within the programmatic EMS Manual (PG470222).

More Details

Health and safety

Avery, Rosemary P.; Johns, William H.

This document provides information on the possible human exposure to environmental media potentially contaminated with radiological materials and chemical constituents from operations at Sandia National Laboratories/New Mexico (SNL/NM). This report is based on the best available information for Calendar Year (CY) 2008, and was prepared in support of future analyses, including those that may be performed as part of the SNL/NM Site-Wide Environmental Impact Statement.

More Details

Long-term environmental stewardship

Nagy, Michael D.

The purpose of this Supplemental Information Source Document is to effectively describe Long-Term Environmental Stewardship (LTES) at Sandia National Laboratories/New Mexico (SNL/NM). More specifically, this document describes the LTES and Long-Term Stewardship (LTS) Programs, distinguishes between the LTES and LTS Programs, and summarizes the current status of the Environmental Restoration (ER) Project.

More Details

Sustaining knowledge in the neutron generator community and benchmarking study. Phase II

Huff, Tameka B.; Baldonado, Esther B.

This report documents the second phase of work under the Sustainable Knowledge Management (SKM) project for the Neutron Generator organization at Sandia National Laboratories. Previous work under this project is documented in SAND2008-1777, Sustaining Knowledge in the Neutron Generator Community and Benchmarking Study. Knowledge management (KM) systems are necessary to preserve critical knowledge within organizations. A successful KM program should focus on people and the process for sharing, capturing, and applying knowledge. The Neutron Generator organization is developing KM systems to ensure knowledge is not lost. A benchmarking study involving site visits to outside industry plus additional resource research was conducted during this phase of the SKM project. The findings presented in this report are recommendations for making an SKM program successful. The recommendations are activities that promote sharing, capturing, and applying knowledge. The benchmarking effort, including the site visits to Toyota and Halliburton, provided valuable information on how the SEA KM team could incorporate a KM solution for not just the neutron generator (NG) community but the entire laboratory. The laboratory needs a KM program that allows members of the workforce to access, share, analyze, manage, and apply knowledge. KM activities, such as communities of practice (COP) and sharing best practices, provide a solution towards creating an enabling environment for KM. As more and more people leave organizations through retirement and job transfer, the need to preserve knowledge is essential. Creating an environment for the effective use of knowledge is vital to achieving the laboratory's mission.

More Details

Adapting ORAP to wind plants : industry value and functional requirements

Strategic Power Systems (SPS) was contracted by Sandia National Laboratories to assess the feasibility of adapting their ORAP (Operational Reliability Analysis Program) tool for deployment to the wind industry. ORAP for Wind is proposed for use as the primary data source for the CREW (Continuous Reliability Enhancement for Wind) database which will be maintained by Sandia to enable reliability analysis of US wind fleet operations. The report primarily addresses the functional requirements of the wind-based system. The SPS ORAP reliability monitoring system has been used successfully for over twenty years to collect RAM (Reliability, Availability, Maintainability) and operations data for benchmarking and analysis of gas and steam turbine performance. This report documents the requirements to adapt the ORAP system for the wind industry. It specifies which existing ORAP design features should be retained, as well as key new requirements for wind. The latter includes alignment with existing and emerging wind industry standards (IEEE 762, ISO 3977 and IEC 61400). There is also a comprehensive list of thirty critical-to-quality (CTQ) functional requirements which must be considered and addressed to establish the optimum design for wind.

More Details

Supplemental information source document : socioeconomics

Sedore, Lora J.

This document provides information on expenditures and staffing levels at Sandia National Laboratories/New Mexico (SNL/NM). This report is based on the best available information obtained from Sandia Corporation for Fiscal Years 2008 and 2009, and was prepared in support of future analyses, including those that may be performed as part of the SNL/NM Site-Wide Environmental Impact Statement.

More Details

A modal approach to modeling spatially distributed vibration energy dissipation

Segalman, Daniel J.

The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

More Details

Control system devices : architectures and supply channels overview

Schwartz, Moses D.; Mulder, John M.; Trent, Jason T.; Atkins, William D.

This report describes a research project to examine the hardware used in automated control systems like those that control the electric grid. This report provides an overview of the vendors, architectures, and supply channels for a number of control system devices. The research itself represents an attempt to probe more deeply into the area of programmable logic controllers (PLCs) - the specialized digital computers that control individual processes within supervisory control and data acquisition (SCADA) systems. The report (1) provides an overview of control system networks and PLC architecture, (2) furnishes profiles for the top eight vendors in the PLC industry, (3) discusses the communications protocols used in different industries, and (4) analyzes the hardware used in several PLC devices. As part of the project, several PLCs were disassembled to identify constituent components. That information will direct the next step of the research, which will greatly increase our understanding of PLC security in both the hardware and software areas. Such an understanding is vital for discerning the potential national security impact of security flaws in these devices, as well as for developing proactive countermeasures.

More Details

The first steps towards a standardized methodology for CSP electricity yield analysis

Ho, Clifford K.

The authors have formed a temporary international core team to prepare a SolarPACES activity aimed at the standardization of a methodology for electricity yield analysis of CSP plants. This core team has drafted a structural framework for a standardized methodology and the standardization process itself. The structural framework must ensure that the standardized methodology is applicable to all conceivable CSP systems, can be used at all levels of the project development process, and covers all aspects affecting the electricity yield of CSP plants. Since the development of the standardized methodology is a complex task, the standardization process has been structured into work packages, and numerous international experts covering all aspects of CSP yield analysis have been asked to contribute to this process. These experts have teamed up in an international working group with the objective of developing, documenting, and publishing standardized methodologies for CSP yield analysis. This paper summarizes the intended standardization process and presents the structural framework of the methodology for CSP yield analysis.

More Details

Uranium for hydrogen storage applications : a materials science perspective

Kolasinski, Robert K.; Shugard, Andrew D.; Tewell, Craig R.; Cowgill, D.F.

Under appropriate conditions, uranium will form a hydride phase when exposed to molecular hydrogen. This makes it quite valuable for a variety of applications within the nuclear industry, particularly as a storage medium for tritium. However, some aspects of the U+H system have been characterized much less extensively than other common metal hydrides (particularly Pd+H), likely due to radiological concerns associated with handling. To assess the present understanding, we review the existing literature database for the uranium hydride system in this report and identify gaps in the existing knowledge. Four major areas are emphasized: ³He release from uranium tritides, the effects of surface contamination on H uptake, the kinetics of the hydride phase formation, and the thermal desorption properties. Our review of these areas is then used to outline potential avenues of future research.

More Details

Living off-grid in an arid environment without a well : can residential and commercial/industrial water harvesting help solve water supply problems?

Axness, Carl L.

Our family of three lives comfortably off-grid without a well in an arid region (approximately 9 in/yr of rainfall, on average). This year we expect to achieve water sustainability with harvested or grey water supporting all of our needs (including a garden and trees), except drinking water (about 7 gallons/week). We discuss our implementation and the implication that, for an investment of a few thousand dollars, many single-family homes could supply a large portion of their own water needs, significantly reducing municipal water demand. Generally, harvested water is very low in minerals and pollutants, but may need treatment for microbes in order to be potable. This may be addressed via filters, UV irradiation, or chemical treatment (bleach). Looking further into the possibility of commercial water harvesting from malls, big box stores, and factories, we ask whether water harvesting could supply a significant portion of potable water by examining two cities with water supply problems. We look at the implications of separate municipal water lines for potable and clean non-potable uses. Implications for future building codes are also explored.
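As a rough sanity check on the claims above, the standard catchment arithmetic (roof area × rainfall × a collection-efficiency factor) can be sketched as follows. Only the ~9 in/yr rainfall figure comes from the abstract; the roof size and the 80% collection efficiency are assumptions for illustration.

```python
# Back-of-the-envelope rainwater harvest estimate (illustrative numbers).
GALLONS_PER_SQFT_INCH = 0.623   # 1 inch of rain on 1 ft^2 ~= 0.623 gal

def annual_harvest_gal(roof_sqft, rainfall_in_per_yr, efficiency=0.8):
    """Gallons captured per year from a roof catchment."""
    return roof_sqft * rainfall_in_per_yr * GALLONS_PER_SQFT_INCH * efficiency

# A hypothetical 2000 ft^2 roof in a ~9 in/yr climate:
print(round(annual_harvest_gal(2000, 9)))   # ~8,971 gal/yr
```

Even in a 9 in/yr climate, a modest roof can plausibly capture thousands of gallons per year, which is consistent with the abstract's claim that harvesting can cover most non-drinking household uses.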

More Details

Why Models Don't Forecast

McNamara, Laura A.

The title of this paper, Why Models Don't Forecast, has a deceptively simple answer: models don't forecast because people forecast. Yet this statement has significant implications for computational social modeling and simulation in national security decision making. Specifically, it points to the need for robust approaches to the problem of how people and organizations develop, deploy, and use computational modeling and simulation technologies. In the next twenty or so pages, I argue that the challenge of evaluating computational social modeling and simulation technologies extends far beyond verification and validation, and should include the relationship between a simulation technology and the people and organizations using it. This challenge of evaluation is not just one of usability and usefulness for technologies, but extends to the assessment of how new modeling and simulation technologies shape human and organizational judgment. The robust and systematic evaluation of organizational decision making processes, and the role of computational modeling and simulation technologies therein, is a critical problem for the organizations who promote, fund, develop, and seek to use computational social science tools, methods, and techniques in high-consequence decision making.

More Details

Plasma-materials interaction results at Sandia National Laboratories

Kolasinski, Robert K.; Buchenauer, D.A.; Cowgill, D.F.; Karnesky, Richard A.; Whaley, Josh A.; Wampler, William R.

An overview of Plasma-Materials Interaction (PMI) activities: (1) Hydrogen diffusion and trapping in metals - (a) Growth of hydrogen precipitates in tungsten PFCs, (b) Temperature dependence of deuterium retention at displacement damage, (c) D retention in W at elevated temperatures; (2) Permeation - (a) Gas-driven permeation results for W/Mo/SiC, (b) Plasma-driven permeation test stand for TPE; and (3) Surface studies - (a) H-sensor development, (b) Adsorption of oxygen and hydrogen on beryllium surfaces.

More Details

Antarctica X-band MiniSAR Crevasse Detection Radar : draft final report

Bickel, Douglas L.; Sander, Grant J.

This document is the final report for the 2009 Antarctica Crevasse Detection Radar (CDR) Project. This portion of the project is referred to internally as Phase 2. This is a follow-on to the work done in Phase 1, reported in [1]. Phase 2 involved the modification of a Sandia National Laboratories MiniSAR system used in Phase 1 to work with an LC-130 aircraft that operated in Antarctica in October through November of 2009. Experiments from the 2006 flights were repeated, as well as a couple of new flight tests to examine the effect of colder snow and ice on the radar signatures of 'deep field' sites. This document includes discussion of the hardware development, system capabilities, and results from data collections in Antarctica during the fall of 2009.

More Details

An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement

Owen, Steven J.

Most adaptive mesh generation algorithms employ a 3-refinement method. Although easy to implement, this method often yields a mesh that is too coarse in some areas and over-refined in others: because it replaces a single hex with 27 new hexes, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, in which each hex to be refined is divided into eight new hexes. This allows greater control over mesh density than a 3-refinement procedure, producing a mesh that is efficient for analysis by providing high element density in specific locations and reduced density in other areas. In addition, this tool can be used effectively in inside-out hexahedral grid-based schemes, which use Cartesian structured grids for the base mesh and have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and keep transitions from lower to higher mesh densities smooth, and templates were introduced to allow both convex and concave refinement.
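The core geometric step of 2-refinement, splitting one hex into eight children by bisecting its edges, can be sketched as below. This is an illustrative fragment only; the paper's actual algorithm also handles the transition templates between refined and unrefined regions.

```python
import itertools
import numpy as np

def refine_hex_2(corners):
    """Split one hexahedron into 8 children (2-refinement) by bisecting
    each edge. `corners` is a (2, 2, 2, 3) array of corner coordinates
    indexed corners[i, j, k] along the three parametric directions.
    Returns a list of 8 child corner arrays of the same shape."""
    c = np.asarray(corners, dtype=float)
    # Trilinear interpolation of the 8 corners gives a 3x3x3 point lattice.
    lat = np.empty((3, 3, 3, 3))
    for i, j, k in itertools.product(range(3), repeat=3):
        u, v, w = i / 2, j / 2, k / 2
        lat[i, j, k] = (
            (1-u)*(1-v)*(1-w)*c[0,0,0] + u*(1-v)*(1-w)*c[1,0,0]
            + (1-u)*v*(1-w)*c[0,1,0] + u*v*(1-w)*c[1,1,0]
            + (1-u)*(1-v)*w*c[0,0,1] + u*(1-v)*w*c[1,0,1]
            + (1-u)*v*w*c[0,1,1] + u*v*w*c[1,1,1])
    # Each child is a 2x2x2 window of the lattice: 2**3 = 8 children
    # (versus 3**3 = 27 for 3-refinement).
    return [lat[i:i+2, j:j+2, k:k+2]
            for i, j, k in itertools.product(range(2), repeat=3)]

unit = np.array([[[[i, j, k] for k in (0, 1)] for j in (0, 1)]
                 for i in (0, 1)], dtype=float)
kids = refine_hex_2(unit)
print(len(kids))  # 8
```

The 8-versus-27 child count is exactly why 2-refinement gives finer-grained control of element density than 3-refinement.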

More Details

Field-structured chemiresistors : tunable sensors for chemical-switch arrays

Read, Douglas R.

We have developed a significantly improved composite material for chemiresistors, which are resistance-based sensors for volatile organic compounds. This material is a polymer composite containing Au-coated magnetic particles organized into electrically conducting pathways by magnetic fields. This improved material overcomes the various problems inherent to conventional carbon-black chemiresistors, while achieving an unprecedented magnitude of response. When exposed to chemical vapors, the polymer swells only slightly, yet this is amplified into large, reversible resistance changes - as much as 9 decades at a swelling of only 1.5%. These conductor-insulator transitions occur over such a narrow range of analyte vapor concentration that these devices can be described as chemical switches. We demonstrate that the sensitivity and response range of these sensors can be tailored over a wide range by controlling the stress within the composite, including through the application of a magnetic field. Such tailorable sensors can be used to create sensor arrays that accurately determine analyte concentration over a broad concentration range, or to create logic circuits that signal a particular chemical environment. It is shown, through combined mass-sorption and conductance measurements, that the response curve of any individual sensor is a function of polymer swelling alone. This has the important implication that individual sensor calibration requires testing with only a single analyte. In addition, we demonstrate a method for analyte discrimination based on sensor response kinetics, which is independent of analyte concentration. This method allows discrimination even between chemically similar analytes. Lastly, additional variables associated with the composite and their effects on sensor response are explored.
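The switch-like conductor-insulator transition can be caricatured with a smooth percolation-style response curve. The functional form and constants below are assumptions for illustration, not the model from this work; only the 9-decade span and the 1.5% swelling figure are taken from the abstract.

```python
import numpy as np

# Illustrative model of a chemiresistor "switch": resistance rises steeply
# once polymer swelling separates the conducting particle pathways.
def resistance_ohm(swelling_pct, r_on=1e3, decades=9, s_crit=1.5, width=0.2):
    """Smooth ON->OFF transition spanning `decades` orders of magnitude,
    centered at an assumed critical swelling s_crit (percent)."""
    frac = 1.0 / (1.0 + np.exp(-(swelling_pct - s_crit) / width))
    return r_on * 10.0 ** (decades * frac)

for s in (0.0, 1.5, 3.0):
    print(f"swelling {s:.1f}% -> {resistance_ohm(s):.3g} ohm")
```

Because the curve is effectively binary outside a narrow swelling window, an array of such sensors with staggered transition points can cover a broad concentration range, as the abstract describes.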

More Details

A comparison of mesh morphing methods for shape optimization

Owen, Steven J.; Staten, Matthew L.

The ability to automatically morph an existing mesh to conform to geometry modifications is a necessary capability to enable rapid prototyping of design variations. This paper compares six methods for morphing hexahedral and tetrahedral meshes, including the previously published FEMWARP and LBWARP methods as well as four new methods. Element quality and performance results show that different methods are superior on different models. We recommend that designers of applications that use mesh morphing consider both the FEMWARP and a linear simplex based method.
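As background to the methods being compared, harmonic (Laplacian) interpolation of prescribed boundary displacements, the flavor of smoothing that underlies FEMWARP-style morphing, can be sketched on a toy mesh. The uniform edge weights below are a simplification; FEMWARP itself uses finite-element stiffness weights.

```python
import numpy as np

def laplacian_morph(coords, edges, fixed, fixed_new):
    """Morph interior nodes by harmonic interpolation of prescribed
    boundary displacements (simplified sketch with uniform weights).
    coords: (n, d) node positions; edges: list of (i, j) index pairs;
    fixed: node indices with prescribed positions; fixed_new: their targets."""
    n = len(coords)
    L = np.zeros((n, n))                      # graph Laplacian
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    free = [i for i in range(n) if i not in set(fixed)]
    disp = np.zeros_like(coords, dtype=float)
    disp[fixed] = np.asarray(fixed_new) - coords[fixed]
    # Solve L_ff u_f = -L_fb u_b for the free (interior) displacements.
    disp[free] = np.linalg.solve(L[np.ix_(free, free)],
                                 -L[np.ix_(free, fixed)] @ disp[fixed])
    return coords + disp

# 1D chain 0-1-2: stretch the right end from x=2 to x=3;
# the interior node moves halfway, to x=1.5.
coords = np.array([[0.0], [1.0], [2.0]])
new = laplacian_morph(coords, [(0, 1), (1, 2)], [0, 2], [[0.0], [3.0]])
print(new.ravel())
```

The interior solve is what keeps element quality reasonable as the boundary moves; the paper's comparison is essentially about which weighting of this solve preserves quality best.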

More Details

Evaluation of annual performance of 2-tank and thermocline thermal storage for trough plants

Kolb, Gregory J.

A study was performed to compare the annual performance of 50 MWe Andasol-like trough plants that employ either a 2-tank or a thermocline-type molten-salt thermal storage system. TRNSYS software was used to create the plant models and to perform the annual simulations. The annual performance of the two plants was found to be nearly identical in the base-case comparison. The thermocline plant performs nearly as well primarily because many trough power blocks can operate at temperatures significantly below the design point. However, if temperatures close to the design point are required, the 2-tank plant would perform significantly better than the thermocline plant.

More Details

AIMFAST : an alignment tool based on fringe reflection methods applied to dish concentrators

Yellowhair, Julius; Carlson, Jeffrey J.; Trapeznikov, Kirill T.

The proper alignment of facets on a dish engine concentrated solar power system is critical to the performance of the system. These systems are generally highly concentrating to produce high temperatures for maximum thermal efficiency, so there is little tolerance for poor optical alignment. Improper alignment can lead to poor performance and shortened life through excessively high flux on the receiver surfaces, imbalanced power on multicylinder engines, and intercept losses at the aperture. Alignment approaches used in the past are time-consuming field operations, typically taking 4-6 h per dish with 40-80 facets on the dish. Production systems of faceted dishes will need rapid, accurate alignment implemented in a fraction of an hour. In this paper, we present an extension to our Sandia Optical Fringe Analysis Slope Technique mirror characterization system that will automatically acquire data, implement an alignment strategy, and provide real-time mirror angle corrections to actuators or to workers beneath the dish. The Alignment Implementation for Manufacturing using Fringe Analysis Slope Technique (AIMFAST) has been implemented and tested at the prototype level. In this paper we present the approach used in AIMFAST to rapidly characterize the dish system and provide near-real-time adjustment updates for each facet. The implemented approach can provide adjustment updates every 5 s, suitable for manual or automated adjustment of facets on a dish assembly line.
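The basic relation behind fringe-reflection (deflectometry) slope measurement is that a local surface slope error theta deviates the reflected ray by 2*theta, so an apparent fringe shift d observed on a target at distance L implies a slope of roughly d/(2L). The helper below is a simplified, hypothetical sketch of that geometry, not the AIMFAST implementation.

```python
# Simplified fringe-reflection slope estimate (assumed flat geometry,
# small angles, target normal to the line of sight).
def slope_mrad(fringe_shift_mm, screen_dist_m):
    """Surface slope error (milliradians) implied by an observed fringe
    shift (mm) at a target a given distance (m) from the mirror facet."""
    # slope [rad] = (d [m]) / (2 * L [m]); with d in mm and L in m the
    # unit conversions cancel to give the result directly in mrad.
    return fringe_shift_mm / (2.0 * screen_dist_m)

# A hypothetical 1 mm fringe shift seen 5 m away implies ~0.1 mrad of slope:
print(slope_mrad(1.0, 5.0))
```

Measuring slope this way across every facet is what lets the system convert observed fringe patterns into per-facet angle corrections fast enough for a 5 s update cycle.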

More Details