Publications

Sodium fast reactor fuels and materials: research needs

Denman, Matthew R.; Porter, Douglas; Wright, Art; Lambert, John; Hayes, Steven; Natesan, Ken; Ott, Larry J.; Garner, Frank; Walters, Leon; Yacout, Abdellatif

An expert panel was assembled to identify gaps in fuels and materials research that must be closed before a sodium-cooled fast reactor (SFR) design can be licensed. The panel considered both metal and oxide fuels, various cladding and duct materials, structural materials, fuel performance codes, fabrication capability and records, and the transient behavior of fuel types. A methodology was developed to rate phenomena and properties on two axes: their importance to a regulatory body and the maturity of the underlying technology base. The technology base for fuels and cladding was divided into three regimes: high-maturity information under conservative operating conditions, low-maturity information under more aggressive operating conditions, and future design expectations for which data are meager.

Transformative monitoring approaches for reprocessing

Cipiti, Benjamin B.

The future of reprocessing in the United States is strongly driven by plant economics. With increasing safeguards, security, and safety requirements, future plant monitoring systems must demonstrate more efficient operations while improving on the current state of the art. The goal of this work was to design and examine the incorporation of advanced plant monitoring technologies into safeguards systems while minimizing the burden on the operator. The technologies examined include microfluidic sampling for more rapid analytical measurements and spectroscopy-based techniques for on-line process monitoring. The Separations and Safeguards Performance Model was used to design the plant layout and to test the effect of adding these technologies. The results show that both technologies fill key gaps in existing materials accountability, enabling timely detection of diversion events that might go undetected in existing plants. The plant architecture and results under diversion scenarios are described. As a tangent to this work, both the AMUSE and SEPHIS solvent extraction codes were examined for integration into the model to improve the realism of diversion scenarios. The AMUSE integration was the more successful and provided useful results; the SEPHIS integration is still a work in progress and may provide an alternative option.
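
As an illustration of the accountability gap these technologies address, consider material balances closed far more frequently thanks to on-line measurements: a simple one-sided CUSUM test over the balance sequence can then flag a slow, protracted diversion that infrequent precise inventories would miss. The sketch below is a toy with hypothetical numbers, not the Separations and Safeguards Performance Model:

```python
import numpy as np

rng = np.random.default_rng(2)

def cusum_alarm(balances, sigma, k=0.5, h=5.0):
    """One-sided Page CUSUM over a sequence of material balances
    (measured inputs minus outputs minus inventory change), standardized
    by the measurement noise sigma. Returns the first alarming period."""
    s = 0.0
    for i, mb in enumerate(balances):
        s = max(0.0, s + mb / sigma - k)
        if s > h:
            return i
    return None

# Hypothetical numbers: 0.2 kg/period diverted against 0.3 kg measurement
# noise. More frequent on-line measurements mean more balance periods per
# year, hence earlier detection of a slow diversion.
diversion, sigma = 0.2, 0.3
balances = diversion + sigma * rng.standard_normal(100)
print("alarm at balance period:", cusum_alarm(balances, sigma))
```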

Stainless steel corrosion by molten nitrates: analysis and lessons learned

Kruizenga, Alan M.

A secondary containment vessel made of 316 stainless steel failed due to severe nitrate salt corrosion. Corrosion in the form of pitting was observed during high-temperature chemical stability experiments. Optical microscopy, scanning electron microscopy, and energy dispersive spectroscopy were used to diagnose the cause of the failure: potassium oxide that crept into the gap between the primary vessel (alumina) and the stainless steel vessel. Molten nitrate solar salt (89% KNO₃, 11% NaNO₃ by weight) was used during the chemical stability experiments, with an oxygen cover gas, at salt temperatures of 350-700 °C. The nitrate salt was primarily contained in the alumina vessel; however, salt crept into the gap between the alumina and the 316 stainless steel. Corrosion occurred over a period of approximately 2000 hours, ending in full wall penetration of the stainless steel vessel; see Figures 1 and 2 for images of the corrosion damage. The wall thickness was 0.0625 inches, which, based on previous data for direct contact with salt at 677 °C (0.081 inch/year), should have been adequate to avoid corrosion-induced failure: at that rate, penetrating the wall would take roughly nine months, far longer than the approximately 2000 hours observed. Salt temperatures exceeded 650 °C for approximately 14 days. However, the previous corrosion data were taken with air as the cover gas; the high temperature combined with an oxygen cover gas evidently drove corrosion rates much higher. Corrosion took the form of uniform pitting. Based on SEM and EDS data, the pits contained primarily potassium oxide and potassium chromate, reinforcing the link between oxides and severe corrosion. In addition to the pitting corrosion, a large blister composed mainly of potassium, chromium, and oxygen formed on the side wall. All data indicated that corrosion initiated internally and moved outward. There was no evidence of intergranular corrosion, nor any indication of fast pathways along grain boundaries. Much of the pitting occurred near welds; however, this was also the hottest region in the chamber, and pitting was observed up to two inches above the weld, indicating independence from weld effects.

LDRD final report: chromophore-functionalized aligned carbon nanotube arrays

Krafcik, Karen; Yang, Chu-Yeu P.

The goal of this project was to expand upon previously demonstrated single carbon nanotube devices by preparing a more practical, multi-single-walled carbon nanotube (SWNT) device. As a late-start, proof-of-concept project, the work focused on the fabrication and testing of chromophore-functionalized aligned SWNT field effect transistors (SWNT-FETs), which have not yet been demonstrated. The advantages of fabricating aligned SWNT devices include increased device cross-section to improve sensitivity to light, elimination of the increased electrical resistance at nanotube junctions in random-mat devices, and the ability to model device responses. Although the ultimate goal of fabricating and testing chromophore-modified SWNT arrays was not achieved, the work did lead to a new carbon nanotube growth capability at Sandia/CA that will benefit future projects. The synthesis of dense arrays of horizontally aligned SWNTs is a developing area of research with significant potential for new discoveries. In particular, the ability to prepare arrays of carbon nanotubes of specific electronic types (metallic or semiconducting) could yield new classes of nanoscale devices.

Solving the software protection problem with intrinsic personal physical unclonable functions

Nithyanand, Rishab; Sion, Radu

Physical Unclonable Functions (PUFs) or Physical One-Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. The unclonability property comes from the accepted hardness of replicating the multitude of characteristics introduced during the manufacturing process. This makes PUFs useful for solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection in offline settings. We first argue that traditional (black-box) PUFs are not useful for protecting software in settings where communication with a vendor's server or third-party network device is infeasible or impossible. Instead, we argue that intrinsic PUFs are needed, because they are intrinsically involved in processing the information that is to be protected. Finally, we describe how sources of randomness in any computing device can be used to create intrinsic personal PUFs (IP-PUFs) and present experimental results in using standard off-the-shelf computers as IP-PUFs.
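
A minimal sketch of the IP-PUF idea follows, with device-specific timing jitter standing in for the intrinsic randomness; the function names, workload, and tolerance are illustrative assumptions, not the authors' implementation:

```python
import hashlib
import statistics
import time

def measure_response(challenge: bytes, reps: int = 5) -> float:
    """Time a challenge-dependent computation. The device-specific timing
    behavior acts as a (weak, noisy) physical randomness source."""
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        h = challenge
        for _ in range(10000):
            h = hashlib.sha256(h).digest()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

def enroll(challenges):
    """Vendor side, done once per device: record its challenge responses."""
    return {c: measure_response(c) for c in challenges}

def verify(database, challenge, tolerance=0.10):
    """Accept if a fresh response lies within an error bound of the
    enrolled one; responses are noisy, so exact equality is never expected."""
    expected = database[challenge]
    observed = measure_response(challenge)
    return abs(observed - expected) / expected < tolerance

challenges = [bytes([i]) for i in range(4)]
db = enroll(challenges)
print(all(verify(db, c) for c in challenges))   # True on the enrolled machine
```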

Deriving a model for influenza epidemics from historical data

Ray, Jaideep

In this report we describe how we create a model for influenza epidemics from historical data collected from both civilian and military populations. We derive the model for the case where the population of the society is unknown but the size of the epidemic is known. Our interest lies in estimating a time-dependent infection rate to within a multiplicative constant. The model form fitted is chosen for its similarity to published models for HIV and plague, enabling application of Bayesian techniques to discriminate among infectious agents during an emerging epidemic. We have developed models for the progression of influenza in human populations. The model is framed as an integral and predicts the number of people who exhibit symptoms and seek care over a given time period; the start and end of the time period form the limits of integration. The disease progression model, in turn, contains parameterized models for the incubation period and a time-dependent infection rate. The incubation period model is obtained from the literature, and the parameters of the infection rate are fitted from historical data covering both military and civilian populations. The calibrated infection rate models display a marked difference between the 1918 Spanish influenza pandemic, the influenza seasons in the US between 2001 and 2008, and the progression of H1N1 in Catalunya, Spain. The data for the 1918 pandemic were obtained from military populations, while the rest are country-wide or province-wide data from the twenty-first century. We see that the initial growth of infection was about the same in all cases; however, military populations were able to control the epidemic much faster, i.e., the decay of the infection-rate curve is much steeper. It is not clear whether this was because of the much higher level of organization present in a military society or the seriousness with which the 1918 pandemic was addressed. Each outbreak to which the influenza model was fitted yields a separate set of parameter values. We suggest 'consensus' parameter values for military and civilian populations in the form of normal distributions so that they may be used in other applications. Representing the parameter values as distributions, instead of point values, allows us to capture the uncertainty and scatter in the parameters; quantifying the uncertainty allows us to use these models further in inverse problems, predictions under uncertainty, and various other studies involving risk.
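
The integral structure can be sketched in a few lines: infections occur at a time-dependent rate (known only up to a multiplicative constant), pass through an incubation-period distribution, and integration over a reporting window yields the expected number of symptomatic care-seekers. The functional forms and parameter values below are placeholders, not the report's fitted models:

```python
import numpy as np
from scipy import stats

def infection_rate(tau, k=50.0, rise=3.0, decay=0.35):
    """Time-dependent infection rate (new infections/day), known only up
    to the multiplicative constant k; the shape is hypothetical."""
    return k * (tau / rise) * np.exp(-decay * tau) if tau > 0 else 0.0

incubation = stats.lognorm(s=0.5, scale=2.0)   # incubation-period pdf, days

def expected_cases(t1, t2, dt=0.1):
    """People infected at time tau show symptoms (and seek care) inside
    [t1, t2] with probability given by the incubation distribution; the
    window endpoints are the limits of integration."""
    taus = np.arange(0.0, t2, dt)
    lam = np.array([infection_rate(tau) for tau in taus])
    p = incubation.cdf(t2 - taus) - incubation.cdf(np.clip(t1 - taus, 0.0, None))
    return float(np.sum(lam * p) * dt)

print(expected_cases(5.0, 6.0))   # expected care-seekers between days 5 and 6
```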

Computational thermal, chemical, fluid, and solid mechanics for geosystems management

Martinez, Mario J.; Red-Horse, John R.; Carnes, Brian R.; Mesh, Mikhail; Field, Richard V.; Davison, Scott M.; Yoon, Hongkyu; Bishop, Joseph E.; Newell, Pania; Notz, Patrick K.; Turner, D.Z.; Subia, Samuel R.; Hopkins, Polly L.; Moffat, Harry K.; Jove-Colon, Carlos F.; Dewers, Thomas; Klise, Katherine A.

This document summarizes research performed under the SNL LDRD 'Computational Mechanics for Geosystems Management to Support the Energy and Natural Resources Mission.' The main accomplishment was the development of a foundational SNL capability for computational thermal, chemical, fluid, and solid mechanics analysis of geosystems, implemented within the SNL Sierra software system. The main goal of this project was the development of a foundational capability for coupled thermal, hydrological, mechanical, and chemical (THMC) simulation of heterogeneous geosystems utilizing massively parallel processing. To address these complex problems, the project integrated research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture. This report summarizes and demonstrates the capabilities that were developed together with the supporting research underlying the models. Key accomplishments are: (1) a general capability for modeling nonisothermal, multiphase, multicomponent flow in heterogeneous porous geologic materials; (2) a general capability to model multiphase reactive transport of species in heterogeneous porous media; (3) constitutive models for describing real, general geomaterials under multiphase conditions utilizing laboratory data; (4) a general capability to couple nonisothermal reactive flow with geomechanics (THMC); (5) phase behavior thermodynamics for the CO₂-H₂O-NaCl system, with a general implementation that enables modeling of other fluid mixtures and adaptive look-up tables that extend the thermodynamic capability to other simulators; (6) a capability for statistical modeling of heterogeneity in geologic materials; and (7) a simulator that utilizes unstructured grids on parallel processing computers.

EMPHASIS/Nevada UTDEM user guide. Version 2.0

Turner, C.D.; Pasik, Michael F.; Seidel, David B.

The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes; the geometries, and the meshes thereof, are limited only by the patience of the user in meshing and by the computing resources available for the solution. UTDEM solves Maxwell's equations with finite-element method (FEM) techniques on tetrahedral elements using vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the ability to simulate systems in which the effects of free charge are important and must be treated in a self-consistent manner. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) as they are accelerated by the electromagnetic (Lorentz) force. The motion of these particles produces a current, which in turn acts as a source for the fields in Maxwell's equations.
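
A standard way PIC codes integrate the macroparticle equations of motion under the Lorentz force is the Boris scheme (half electric kick, magnetic rotation, second half kick). The sketch below illustrates that general technique, not UTDEM PIC's specific implementation:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Lorentz-force step for a macroparticle using the Boris scheme.
    q_m is the charge-to-mass ratio of the physical species, shared by all
    the real particles the macroparticle represents."""
    v_minus = v + 0.5 * q_m * E * dt          # half electric kick
    t = 0.5 * q_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * E * dt       # second half kick
    return x + v_new * dt, v_new

# Example: electron-like macroparticle gyrating in a uniform magnetic field.
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 0.01])
for _ in range(100):
    x, v = boris_push(x, v, E, B, q_m=-1.76e11, dt=1.0e-12)
print(x, np.linalg.norm(v))   # speed is conserved when E = 0
```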

EMPHASIS/Nevada CABANA user guide. Version 2.0

Turner, C.D.; Bohnhoff, William J.; Troup, Jennifer L.

The CABle ANAlysis (CABANA) portion of the EMPHASIS™ suite is designed specifically for the simulation of cable system-generated electromagnetic pulse (SGEMP). The code can be used to evaluate the response of a specific cable design to a threat or to compare and minimize the relative responses of different designs. This document provides user-specific information to facilitate the application of the code to cables of interest. CABANA solves the electrical portion of a cable SGEMP simulation: it takes specific results from the deterministic radiation-transport code CEPTRE as sources and computes the resulting electrical response for an arbitrary cable load. The cable geometry itself is also arbitrary and is limited only by the patience of the user in meshing and by the computing resources available for the solution. The CABANA simulation involves solution of the quasi-static Maxwell equations using finite-element method (FEM) techniques.

Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids

Wagner, Gregory J.; Deng, Jie; Erickson, Lindsay; Plimpton, Steven J.; Thompson, A.P.; Zhou, Xiaowang; Zimmerman, Jonathan A.

Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales because they must resolve the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine-scale atomistic description of a system with a slower, coarse-scale description in order to project the system forward over long times.
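
The 'equation-free' idea can be sketched as coarse projective integration: run a short burst of the fine-scale simulation, estimate the time derivative of a coarse variable, and project that variable forward over a much larger step. In the toy below, a cheap stochastic relaxation stands in for the molecular dynamics burst; only the projection loop is the point:

```python
import numpy as np

def fine_step(state, dt):
    """Stand-in for an expensive atomistic step (a noisy relaxation here;
    in practice, a short burst of molecular dynamics)."""
    return state - 0.5 * state * dt + 1e-3 * np.random.randn(*state.shape)

def restrict(state):
    """Coarse variable: here just the mean of the fine-scale state."""
    return state.mean()

def projective_step(state, dt_fine=1e-3, n_burst=20, dt_coarse=0.05):
    """Short fine-scale burst, coarse-derivative estimate, long projection."""
    u0 = restrict(state)
    for _ in range(n_burst):
        state = fine_step(state, dt_fine)
    u1 = restrict(state)
    dudt = (u1 - u0) / (n_burst * dt_fine)    # coarse time derivative
    u_proj = u1 + dudt * dt_coarse            # project over the big step
    return state * (u_proj / u1), u_proj      # crude lifting back to fine scale

state = np.ones(1000)
for _ in range(10):
    state, u = projective_step(state)
print(u)   # coarse variable advanced far beyond the fine-scale bursts alone
```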

A Theoretical Analysis: Physical Unclonable Functions and the Software Protection Problem

Nithyanand, Rishab; Solis, John H.

Physical Unclonable Functions (PUFs) or Physical One-Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics, and it makes PUFs useful for solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real-world adversaries in offline settings. Our contributions are the following: we provide two real-world adversary models (weak and strong variants) and present definitions of security against these adversaries. We then propose schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme, based on trusted hardware, that is secure against strong adversaries.

A design for a V&V and UQ discovery process

Knupp, Patrick K.; Urbina, Angel U.

There is currently sparse literature on how to implement systematic and comprehensive processes for modern V&V/UQ (VU) within large computational simulation projects. Important design requirements have been identified for constructing a viable 'system' of processes. Significant processes that are needed include discovery, accumulation, and assessment. A preliminary design is presented for a VU Discovery process that accounts for an important subset of the requirements. The design uses a hierarchical approach to set context and a series of placeholders that identify the evidence and artifacts that need to be created in order to tell the VU story and to perform assessments. The hierarchy incorporates VU elements from a Predictive Capability Maturity Model and uses questionnaires to define critical issues in VU. The placeholders organize VU data within a central repository that serves as the official VU record of the project. A review process ensures that those who will contribute to the record have agreed to provide the evidence identified by the Discovery process. VU expertise is an essential part of this process and ensures that the roadmap provided by the Discovery process is adequate. Both the requirements and the design were developed to support the Nuclear Energy Advanced Modeling and Simulation Waste project, which is developing a set of advanced codes for simulating the performance of nuclear waste storage sites. The Waste project served as an example to keep the design of the VU Discovery process grounded in practicalities; however, the system is represented abstractly so that it can be applied to other M&S projects.
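
A minimal sketch of the data structure the design implies: a hierarchy of VU elements, each holding placeholders for evidence to be created, with a traversal that supports the review step by reporting unfilled placeholders. All names and fields below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Placeholder:
    """One unit of the VU record: names an evidence artifact to be created."""
    artifact: str
    owner: str = "unassigned"
    evidence: Optional[str] = None    # filled in as the project produces it

@dataclass
class VUElement:
    """A node in the hierarchy (e.g., a PCMM element), holding its own
    placeholders and any child elements."""
    name: str
    placeholders: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def missing(self):
        """Review support: walk the hierarchy and report evidence gaps."""
        gaps = [(self.name, p.artifact) for p in self.placeholders
                if p.evidence is None]
        for child in self.children:
            gaps.extend(child.missing())
        return gaps

record = VUElement("VU record", children=[
    VUElement("code verification", [Placeholder("regression test report")]),
    VUElement("solution verification", [Placeholder("mesh convergence study")]),
])
print(record.missing())   # the roadmap of evidence still to be provided
```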

A toolbox for a class of discontinuous Petrov-Galerkin methods using Trilinos

Ridzal, Denis; Bochev, Pavel B.

The class of discontinuous Petrov-Galerkin finite element methods (DPG) proposed by L. Demkowicz and J. Gopalakrishnan guarantees the optimality of the solution in an energy norm and produces a symmetric positive definite stiffness matrix, among other desirable properties. In this paper, we describe a toolbox, implemented atop Sandia's Trilinos library, for rapid development of solvers for DPG methods. We use this toolbox to develop solvers for the Poisson and Stokes problems.
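
One practical consequence of the symmetric positive definite stiffness matrix is that conjugate-gradient-type solvers apply directly. The NumPy sketch below demonstrates that property on a stand-in SPD system; it is not the Trilinos-based toolbox itself:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive definite A, the class of
    system a DPG discretization yields by construction."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD system standing in for a DPG stiffness matrix.
M = np.random.rand(50, 50)
A = M @ M.T + 50.0 * np.eye(50)
b = np.random.rand(50)
print(np.allclose(A @ conjugate_gradient(A, b), b))
```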

Analysis of sheltering and evacuation strategies for a Chicago nuclear detonation scenario

Brandt, Larry D.; Yoshimura, Ann S.

Development of an effective strategy for shelter and evacuation is among the most important planning tasks in preparation for response to a low-yield nuclear detonation in an urban area. Extensive studies have been performed, and published guidance highlights the key principles for saving lives following such an event. However, region-specific data are important in the planning process as well. This study examines some of the unique regional factors that affect planning for a 10 kt detonation in Chicago. The work utilizes a single scenario to examine regional impacts as well as the shelter-evacuate decision alternatives at selected exemplary points. For many Chicago neighborhoods, the excellent assessed quality of the available shelter makes shelter-in-place or selective transit to a nearby shelter a compelling post-detonation strategy.

Earthquake warning system for infrastructures: a scoping analysis

Kelic, Andjelka; Stamber, Kevin L.; Brodsky, Nancy S.; Vugrin, Eric; Corbet Jr., Thomas F.; O'Connor, Sharon L.

This report provides the results of a scoping study evaluating the potential risk-reduction value of a hypothetical earthquake early-warning system. The study was based on an analysis of the actions that could be taken to reduce risks to population and infrastructure, how much time would be required to take each action, and the potential consequences of false alarms given the nature of the action. The results of the scoping analysis indicate that risks could be reduced by improving existing event notification systems and individual responses to notification, and by producing and utilizing more detailed risk maps for local planning. Detailed maps and training programs, based on existing knowledge of geologic conditions and processes, would reduce uncertainty in the consequence portion of the risk analysis. Uncertainties in the timing, magnitude, and location of earthquakes, together with the potential impacts of false alarms, will present major challenges to the value of an early-warning system.

Surveillance metrics sensitivity study

Bierbaum, Rene L.

In September 2009, a Tri-Lab team was formed to develop a set of metrics for the NNSA nuclear weapon surveillance program. The purpose of the metrics was to provide a more quantitative and/or qualitative description of how realized or non-realized surveillance activities affect our confidence in reporting reliability and assessing the stockpile. As part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but rather the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
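
A toy version of one such power calculation: the probability that random surveillance sampling catches at least one catastrophic defect present in a fraction p of the population, and the sample size needed for a target confidence. The report's metrics are more elaborate; this only illustrates the level-of-confidence framing:

```python
def detection_power(n, p):
    """Probability that at least one defective unit appears in a random
    sample of n when a fraction p of the population is defective."""
    return 1.0 - (1.0 - p) ** n

def samples_needed(p, confidence=0.95):
    """Smallest sample size achieving the requested detection probability,
    i.e., an answer to a level-of-confidence question."""
    n = 1
    while detection_power(n, p) < confidence:
        n += 1
    return n

print(detection_power(11, 0.10))   # ~0.69 for 11 samples, 10% defect fraction
print(samples_needed(0.10))        # 29 samples for 95% confidence
```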

Real-time characterization of partially observed epidemics using surrogate models

Safta, Cosmin; Ray, Jaideep; Sargsyan, Khachik; Lefantzi, Sophia

We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model on a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is a factor of O(10)-O(10²) smaller than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological model in question, it may be possible to forgo the offline creation and caching of surrogate models prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as observations from the 1918 influenza pandemic collected at Camp Custer, Michigan.
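
The surrogate construction can be sketched in a few lines: fit coefficients of a polynomial chaos (here Legendre) basis to a modest number of model runs, after which every surrogate evaluation inside the MCMC loop is just a polynomial. The sketch fits by regression rather than spectral projection, and the 'simulator' is a cheap stand-in:

```python
import numpy as np
from numpy.polynomial import legendre

def model(theta):
    """Stand-in for an expensive epidemic simulator output (e.g., a case
    count at a fixed time) versus one parameter scaled to [-1, 1]."""
    return np.exp(0.8 * theta) / (1.0 + np.exp(-3.0 * theta))

# Offline stage: a modest number of model runs fixes the coefficients,
# far fewer evaluations than the MCMC itself will need.
order = 8
theta_train = np.linspace(-1.0, 1.0, 40)
coeffs = legendre.legfit(theta_train, model(theta_train), order)

def surrogate(theta):
    """Cheap enough to sit inside an MCMC loop."""
    return legendre.legval(theta, coeffs)

theta_test = np.random.uniform(-1.0, 1.0, 5)
print(np.max(np.abs(surrogate(theta_test) - model(theta_test))))  # small error
```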

Ductile failure X-prize

Boyce, Brad L.; Foulk, James W.; Littlewood, David J.; Mota, Alejandro; Ostien, Jakob T.; Silling, Stewart; Spencer, Benjamin W.; Wellman, Gerald W.; Bishop, Joseph E.; Brown, Arthur; Cordova, Theresa E.; Cox, James; Crenshaw, Thomas B.; Dion, Kristin; Emery, John M.

Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities, including tearing parameters embedded in a conventional finite element code, localization elements, the extended finite element method (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation; only after the predictions had been reported were they compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and the precipitation-hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.

Time Encoded Radiation Imaging

Marleau, P.; Brubaker, E.; Gerling, Mark; Schuster, Patricia F.; Steele, J.

Passive detection of special nuclear material (SNM) at long range or under heavy shielding can only be achieved by observing the penetrating neutral particles that it emits: gamma rays and neutrons in the MeV energy range. The ultimate SNM standoff detector system would have sensitivity to both gamma and neutron radiation, a large area and high efficiency to capture as many signal particles as possible, and good discrimination against background particles via directional and energy information. Designing such a system is a daunting task. Time-modulated collimators could be a transformative technique, leading to practical gamma-neutron imaging detector systems that are highly efficient and can potentially exhibit simultaneously high angular and energy resolution. A new technique using time encoding to make a compact, high-efficiency imaging detector was conceived. Design considerations based on Monte Carlo modeling and the construction and demonstration of a prototype imager are described.
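
A toy illustration of the time-encoding principle: as a collimator pattern rotates past the detector, each source direction is imprinted on the count-rate time series and can be recovered by matched filtering against the known mask motion. The geometry and rates below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
nbins = 64                               # angular bins around the detector
mask = rng.integers(0, 2, nbins)         # open/closed collimator pattern
src_bin, src_rate, bg = 17, 200.0, 20.0  # hidden source direction, count rates

# As the mask rotates one bin per time step s, the source is alternately
# exposed and blocked, encoding its direction in the time series.
counts = np.array([rng.poisson(src_rate * mask[(src_bin - s) % nbins] + bg)
                   for s in range(nbins)])

# Decode: test every candidate direction against the known mask motion.
s_idx = np.arange(nbins)
scores = [counts @ mask[(d - s_idx) % nbins] for d in range(nbins)]
print("reconstructed source bin:", int(np.argmax(scores)))   # 17
```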

Keeping checkpoint/restart viable for exascale systems

Ferreira, Kurt; Oldfield, Ron; Stearley, Jon S.; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Next-generation exascale systems, those capable of performing a quintillion (10¹⁸) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, like checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches across a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
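
The core tension can be seen in the classic Young/Daly checkpoint-interval model (used here only for illustration; the paper's own analysis rests on empirical measurement, modeling, and simulation). As the mean time between failures shrinks, the optimal interval shortens and the fraction of machine time lost grows sharply:

```python
import math

def daly_interval(delta, mtbf):
    """First-order optimal checkpoint interval for commit time delta and
    platform mean time between failures mtbf (both in seconds)."""
    return math.sqrt(2.0 * delta * mtbf)

def wasted_fraction(interval, delta, mtbf):
    """Approximate fraction of time lost to checkpoint commits plus
    expected rework after a failure."""
    return delta / interval + (interval / 2.0 + delta) / mtbf

# Commit time of 5 minutes; MTBF shrinking as node counts grow.
for mtbf_h in (24.0, 4.0, 1.0):
    mtbf = mtbf_h * 3600.0
    topt = daly_interval(300.0, mtbf)
    print(f"MTBF {mtbf_h:4.1f} h: checkpoint every {topt / 60.0:5.1f} min, "
          f"~{100.0 * wasted_fraction(topt, 300.0, mtbf):4.1f}% time lost")
```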

Anomaly metrics to differentiate threat sources from benign sources in primary vehicle screening

Mengesha, Wondwosen

Discrimination of benign sources from threat sources at Ports of Entry (POE) is of great importance for efficient screening of cargo and vehicles using Radiation Portal Monitors (RPMs). Currently, the ability of deployed RPMs to distinguish these radiological sources is seriously hampered by their limited energy resolution. Because naturally occurring radioactive materials (NORM) are ubiquitous in commerce, false alarms are problematic: they require additional resources for secondary inspection and impact commerce. To increase the sensitivity of such detection systems without increasing false alarm rates, alarm metrics must incorporate the ability to distinguish benign from threat sources. Principal component analysis (PCA) and clustering techniques were implemented in the present study and investigated for their potential to lower false alarm rates and/or increase sensitivity to weaker threat sources without loss of specificity. Results of the investigation demonstrated improved sensitivity and specificity in discriminating benign sources from threat sources.
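
A minimal sketch of a PCA-based anomaly metric of this kind: learn the principal subspace spanned by benign (NORM-dominated) vehicle profiles, then score new vehicles by their residual outside that subspace. The data below are synthetic stand-ins for RPM count-rate features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Benign traffic lives near a low-dimensional subspace; threats do not.
latent = rng.normal(0.0, 1.0, (500, 3))
benign = latent @ rng.normal(0.0, 1.0, (3, 8)) \
         + 0.1 * rng.normal(0.0, 1.0, (500, 8))
threat = benign[:5] + rng.normal(4.0, 0.5, (5, 8))

# Fit PCA on benign traffic only; keep the top three components.
mean = benign.mean(axis=0)
_, _, Vt = np.linalg.svd(benign - mean, full_matrices=False)
P = Vt[:3].T                              # basis of the benign subspace

def anomaly_metric(x):
    """Residual norm outside the benign principal subspace; large values
    flag vehicles whose profiles do not look like NORM."""
    r = (x - mean) - P @ (P.T @ (x - mean))
    return float(np.linalg.norm(r))

print("benign:", round(float(np.mean([anomaly_metric(x) for x in benign])), 2))
print("threat:", round(float(np.mean([anomaly_metric(x) for x in threat])), 2))
```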

Bio-inspired nanocomposite assemblies as smart skin components

Frischknecht, Amalie L.; Edwards, Thayne L.; Achyuthan, Komandoor; Wheeler, David R.; Brozik, Susan M.

There is national interest in the development of sophisticated materials that can automatically detect and respond to chemical and biological threats without the need for human intervention. In living systems, cell membranes perform such functions on a routine basis, detecting threats, communicating with the cell, and triggering automatic responses such as the opening and closing of ion channels. The purpose of this project was to learn how to replicate simple threat detection and response functions within artificial membrane systems. The original goals toward developing 'smart skin' assemblies included: (1) synthesizing functionalized nanoparticles to produce electrochemically responsive systems within lipid bilayer host matrices, (2) calculating the energetics of nanoparticle-lipid interactions and pore formation, and (3) determining the mechanism of insertion of nanoparticles in lipid bilayers via imaging and electrochemistry. There are a few reports of the use of programmable materials to open and close pores in rigid hosts such as mesoporous materials using either heat or light activation; however, none of these materials can regulate themselves in response to the detection of threats. The strategies we investigated involve using programmable nanomaterials to automatically eliminate open channels within a lipid bilayer host when 'threats' are detected. We generated and characterized functionalized nanoparticles that can be used to create synthetic pores through the membrane and investigated methods of eliminating the pores, for example through electrochemistry or a change in pH. We also characterized the behavior of functionalized gold nanoparticles in different lipid membranes and lipid vesicles, and coupled these results to modeling efforts designed to build an understanding of the interaction of nanoparticles with lipid assemblies.

Development and characterization of 3D, nano-confined multicellular constructs for advanced biohybrid devices

Kaehr, Bryan J.

This is the final report for the President Harry S. Truman Fellowship in National Security Science and Engineering (LDRD project 130813) awarded to Dr. Bryan Kaehr from 2008-2011. Biological chemistries, cells, and integrated systems (e.g., organisms, ecologies, etc.) offer important lessons for the design of synthetic strategies and materials. The desire to both understand and ultimately improve upon biological processes has been a driving force for considerable scientific efforts worldwide. However, to impart the useful properties of biological systems into modern devices and materials requires new ideas and technologies. The research herein addresses aspects of these issues through the development of (1) a rapid-prototyping methodology to build 3D bio-interfaces and catalytic architectures, (2) a quantitative method to measure cell/material mechanical interactions in situ and at the microscale, and (3) a breakthrough approach to generate functional biocomposites from bacteria and cultured cells.

Investigation of type-I interferon dysregulation by arenaviruses: a multidisciplinary approach

Branda, Catherine; James, Conrad D.; Kozina, Carol L.; Manginell, Ronald; Misra, Milind; Moorman, Matthew W.; Negrete, Oscar N.; Ricken, Bryce; Wu, Meiye

This report provides a detailed overview of the work performed for project number 130781, 'A Systems Biology Approach to Understanding Viral Hemorrhagic Fever Pathogenesis.' We report progress in five key areas: single-cell isolation devices and control systems, fluorescent cytokine and transcription factor reporters, on-chip viral infection assays, molecular virology analysis of Arenavirus nucleoprotein structure-function, and development of computational tools to predict virus-host protein interactions. Although a great deal of work remains beyond what was begun here, we have developed several novel single-cell analysis tools and knowledge of Arenavirus biology that will facilitate and inform future publications and funding proposals.

Tracking topic birth and death in LDA

Wilson, Andrew T.; Robinson, David G.

Most topic modeling algorithms that address the evolution of documents over time use the same number of topics at all times. This obscures a common occurrence in the data: new subjects arise and old ones diminish or disappear entirely. We propose an algorithm to model the birth and death of topics within an LDA-like framework. The user selects an initial number of topics, after which new topics are created and retired without further supervision. Our approach also accommodates many of the acceleration and parallelization schemes developed in recent years for standard LDA. In recent years, topic modeling algorithms such as latent semantic analysis (LSA) [17], latent Dirichlet allocation (LDA) [10], and their descendants have offered a powerful way to explore and interrogate corpora far too large for any human to grasp without assistance. Using such algorithms we are able to search for similar documents, model and track the volume of topics over time, search for correlated topics, or model them with a hierarchy. Most of these algorithms are intended for use with static corpora, where the number of documents and the size of the vocabulary are known in advance. Moreover, almost all current topic modeling algorithms fix the number of topics as one of the input parameters and keep it fixed across the entire corpus. While this is appropriate for static corpora, it becomes a serious handicap when analyzing time-varying data sets where topics come and go as a matter of course. This is doubly true for online algorithms that may not have the option of revising earlier results in light of new data. To be sure, these algorithms will account for changing data one way or another, but without the ability to adapt to structural changes such as entirely new topics they may do so in counterintuitive ways.
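
The birth/death bookkeeping can be sketched separately from the LDA inference itself. In the toy below, a trivial centroid 'topic model' stands in for LDA: a document far from every live topic triggers a birth, and topics whose usage decays below a threshold are retired. The thresholds and the clustering rule are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
BIRTH_DIST, DEATH_USAGE, DECAY = 1.2, 0.05, 0.9

class TopicTracker:
    """Create and retire topics without supervision; centroids stand in
    for the per-topic word distributions of a real LDA-like model."""
    def __init__(self, dim):
        self.topics, self.usage = [rng.random(dim)], [1.0]

    def process(self, doc):
        dists = [np.linalg.norm(doc - t) for t in self.topics]
        i = int(np.argmin(dists))
        if dists[i] > BIRTH_DIST:             # poorly explained: topic born
            self.topics.append(doc.copy())
            self.usage.append(1.0)
        else:                                 # otherwise reinforce the nearest
            self.topics[i] = 0.9 * self.topics[i] + 0.1 * doc
            self.usage[i] += 1.0
        self.usage = [u * DECAY for u in self.usage]
        live = [j for j, u in enumerate(self.usage) if u > DEATH_USAGE]
        self.topics = [self.topics[j] for j in live]   # unused topics die
        self.usage = [self.usage[j] for j in live]

tracker = TopicTracker(20)
for era in range(3):                          # each era introduces a new subject
    center = rng.random(20) * 3.0
    for _ in range(50):
        tracker.process(center + 0.1 * rng.standard_normal(20))
    print(f"era {era}: {len(tracker.topics)} live topic(s)")
```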

Rapid hydrogen gas generation using reactive thermal decomposition of uranium hydride

Shugard, Andrew D.; Buffleben, George M.; James, Scott; Kanouff, Michael P.; Robinson, David; Mills, Bernice E.; Gharagozloo, Patricia E.; Van Blarigan, Peter

Oxygen gas injection has been studied as one method for rapidly generating hydrogen gas from a uranium hydride storage system. Small-scale reactors containing 2.9 g of UH₃ were used to study the process experimentally. Complementary numerical simulations were used to better characterize and understand the strongly coupled chemical and thermal transport processes controlling hydrogen gas liberation. The results indicate that UH₃ and O₂ are sufficiently reactive to enable a well-designed system to release gram quantities of hydrogen in ~2 seconds over a broad temperature range. The major system-design challenge appears to be heat management. In addition to the oxidation tests, H/D isotope exchange experiments were performed. The rate-limiting step in the overall gas-to-particle exchange process was found to be hydrogen diffusion in the ~0.5 µm hydride particles. The experiments generated a set of high-quality experimental data from which effective intra-particle diffusion coefficients can be inferred.
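
Inferring an effective diffusivity from exchange data commonly rests on the classic series solution for diffusion in a sphere; the sketch below matches a hypothetical half-exchange time (the numbers are placeholders, not the report's data):

```python
import numpy as np

def fractional_exchange(t, D, radius, nterms=200):
    """Crank's series solution for diffusion-limited uptake by a sphere:
    fraction of H/D exchange completed at time t for intra-particle
    diffusivity D (m^2/s) and particle radius in meters."""
    n = np.arange(1, nterms + 1)
    terms = np.exp(-((n * np.pi / radius) ** 2) * D * t) / n**2
    return 1.0 - (6.0 / np.pi**2) * float(np.sum(terms))

radius = 0.25e-6          # m, half the ~0.5 micron particle diameter
t_half = 30.0             # s, hypothetical measured half-exchange time

# Scan diffusivities and keep the one whose predicted exchange fraction
# at t_half is closest to 0.5.
Ds = np.logspace(-18.0, -14.0, 400)
best = min(Ds, key=lambda D: abs(fractional_exchange(t_half, D, radius) - 0.5))
print(f"effective D ~ {best:.2e} m^2/s")
```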

Enabling graphene nanoelectronics

Ohta, Taisuke; McCarty, Kevin F.; Beechem, Thomas E.; Pan, Wei; Biedermann, Laura B.; Ross III, Anthony J.; Gutierrez, Carlos

Recent work has shown that graphene, a 2D electronic material amenable to planar semiconductor fabrication processing, possesses tunable electronic material properties potentially far superior to those of metals and other standard semiconductors. Despite these phenomenal electronic properties, focused research is still required to develop techniques for depositing and synthesizing graphene over large areas, thereby enabling the reproducible mass-fabrication of graphene-based devices. To address these issues, we combined an array of growth techniques and characterization resources to investigate several innovative and synergistic approaches for the synthesis of high-quality graphene films on technologically relevant substrates (SiC and metals). Our work focused on developing the fundamental scientific understanding necessary to generate large-area graphene films that exhibit highly uniform electronic properties and record carrier mobility, as well as on developing techniques to transfer graphene onto other substrates.
