Publications

Eddy sensors for small diameter stainless steel tubes

Morales, Alfredo M.; Andersen, Lisa E.; Skinner, J.L.; LaFord, Marianne L.; Korellis, Henry J.

The goal of this project was to develop non-destructive, minimally disruptive eddy sensors to inspect small-diameter stainless steel tubes. Modifications to Sandia's Emphasis/EIGER code allowed for the modeling of eddy current bobbin sensors near or around 1/8-inch outer diameter stainless steel tubing. Modeling results indicated that an eddy sensor based on a single axial coil could effectively detect changes in the inner diameter of stainless steel tubing. Based on the modeling results, sensor coils capable of detecting small changes in the inner diameter of a stainless steel tube were designed, built, and tested. The observed sensor response agreed with the results of the modeling and with eddy sensor theory. A separate limited-distribution SAND report is being issued demonstrating the application of this sensor.

A Model-Based Case for Redundant Computation

Stearley, Jon S.; Robinson, David G.; Ferreira, Kurt

Despite its seemingly nonsensical cost, we show through modeling and simulation that redundant computation merits full consideration as a resilience strategy for next-generation systems. It has been shown that, without revolutionary breakthroughs in failure rates, part counts, or stable-storage bandwidths, the utility of Exascale systems will be crushed by the overheads of traditional checkpoint/restart mechanisms. Alternate resilience strategies must be considered, and redundancy is a proven approach in many domains. We develop a distribution-independent model for job interrupts on systems of arbitrary redundancy, adapt Daly's model for total application runtime, and find that his estimate for the optimal checkpoint interval remains valid for redundant systems. We then identify conditions under which redundancy is more cost-effective than non-redundancy. These analyses are done in the context of the number-one supercomputers of the last decade, showing that thorough consideration of redundant computation is timely, if not overdue.
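Daly's estimate referenced above sets the optimal checkpoint interval from the checkpoint write cost and the system's mean time to interrupt (MTTI); the abstract's finding is that the same estimate applies to a redundant system using that system's MTTI. A minimal sketch, assuming the commonly cited first-order form τ_opt ≈ √(2δM) − δ for checkpoint cost δ much smaller than MTTI M (function name and numbers are illustrative):

```python
import math

def daly_optimal_interval(delta, mtti):
    """First-order Daly estimate of the optimal checkpoint interval.

    delta -- time to write one checkpoint (seconds)
    mtti  -- mean time to interrupt of the (possibly redundant) system
    Valid when delta is small relative to mtti.
    """
    if delta >= 2 * mtti:
        return mtti  # outside the model's range of validity
    return math.sqrt(2 * delta * mtti) - delta

# e.g. 10-minute checkpoints on a system with a 24-hour MTTI
tau = daly_optimal_interval(600.0, 24 * 3600.0)  # roughly 2.7 hours
```

Raising redundancy raises the effective MTTI, which lengthens the optimal interval and shrinks checkpoint overhead; that is the tradeoff against the duplicated hardware cost.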

OPSAID improvements and capabilities report

Chavez, Adrian R.; Halbgewachs, Ronald D.

Process Control System (PCS) and Industrial Control System (ICS) security is critical to our national security, but a number of technological, economic, and educational impediments keep PCS owners from implementing effective security on their systems. To address this issue, Sandia National Laboratories has performed research and development on OPSAID (Open PCS Security Architecture for Interoperable Design), a project sponsored by the US Department of Energy Office of Electricity Delivery and Energy Reliability (DOE/OE). OPSAID is an open-source architecture for PCS/ICS security that provides a design basis for vendors to build add-on security devices for legacy systems, while providing a path forward for the development of inherently secure PCS elements in the future. Using standardized hardware, a proof-of-concept prototype system was also developed. This report describes the improvements and capabilities that have been added to OPSAID since an initial report was released. Testing and validation of this architecture have been conducted in another project, the Lemnos Interoperable Security Project, sponsored by DOE/OE and managed by the National Energy Technology Laboratory (NETL).

Atom-to-continuum methods for gaining a fundamental understanding of fracture

Jones, Reese E.; Zimmerman, Jonathan A.; Templeton, Jeremy A.; Zhou, Xiaowang Z.; Moody, Neville R.; Reedy, Earl D.

This report describes an Engineering Sciences Research Foundation (ESRF) project to characterize and understand fracture processes via molecular dynamics modeling and atom-to-continuum methods. The effort is predicated on the idea that processes and information at the atomic level are missing in engineering-scale simulations of fracture and, moreover, are necessary for these simulations to be predictive. In this project we developed considerable new theory and a number of novel techniques to describe the fracture process at the atomic scale, ranging from a material-frame connection between molecular dynamics and continuum mechanics to an atomic-level J integral. These developments build upon one another and culminate in a cohesive zone model derived from atomic information and verified at the continuum scale. Chapter 2 gives a detailed account of the material-frame connection between molecular dynamics and continuum mechanics we constructed in order to best use atomic information from solid systems. With this framework, in Chapter 3, we were able to make a direct and elegant extension of the classical J integral down to simulations on the scale of nanometers with a discrete atomic lattice. The technique was applied to cracks and dislocations with equal success and displayed high fidelity with expectations from continuum theory. Then, as a prelude to extension of the atomic J to finite temperatures, we explored quasi-harmonic models as efficient and accurate surrogates of atomic lattices undergoing thermo-elastic processes (Chapter 4).
With this in hand, in Chapter 5 we provide evidence that, by using the appropriate energy potential, the atomic J integral we developed is calculable and accurate at finite/room temperatures. In Chapter 6, we return in part to the fundamental efforts to connect material behavior at the atomic scale to that of the continuum. In this chapter, we devise theory that predicts the onset of instability characteristic of fracture/failure via atomic simulation. In Chapters 7 and 8, we describe the culmination of the project in connecting atomic information to continuum modeling. In these chapters we show that cohesive zone models are: (a) derivable from molecular dynamics in a robust and systematic way, and (b) when used in the more efficient continuum-level finite element technique provide results that are comparable and well-correlated with the behavior at the atomic-scale. Moreover, we show that use of these same cohesive zone elements is feasible at scales very much larger than that of the lattice. Finally, in Chapter 9 we describe our work in developing the efficient non-reflecting boundary conditions necessary to perform transient fracture and shock simulation with molecular dynamics.

Optimizing Tpetra's sparse matrix-matrix multiplication routine

Nusbaum, Kurtis L.

Over the course of the last year, a sparse matrix-matrix multiplication routine has been developed for the Tpetra package. This routine is based on the same algorithm used in EpetraExt, with heavy modifications. Since the routine reached a working state, several major optimizations have been made to speed it up. This report discusses the optimizations made to the routine, its current state, and where future work is needed.
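The kernel in question belongs to the row-by-row (Gustavson-style) family of sparse matrix-matrix multiplies over compressed sparse row (CSR) data. A minimal serial sketch of that family, not Tpetra's actual implementation (names and the dict accumulator are illustrative):

```python
def spgemm_csr(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val):
    """Row-by-row sparse matrix-matrix multiply C = A*B in CSR form.

    For each row of A, scatter the scaled rows of B into a sparse
    accumulator, then gather the result as one row of C.
    """
    c_ptr, c_idx, c_val = [0], [], []
    for row in range(len(a_ptr) - 1):
        acc = {}  # column index -> accumulated value for this row of C
        for k in range(a_ptr[row], a_ptr[row + 1]):
            col_a, v_a = a_idx[k], a_val[k]
            for j in range(b_ptr[col_a], b_ptr[col_a + 1]):
                acc[b_idx[j]] = acc.get(b_idx[j], 0.0) + v_a * b_val[j]
        for col in sorted(acc):
            c_idx.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val
```

Production implementations replace the dict with a dense "sparse accumulator" workspace and reuse it across rows; that memory-access pattern is where most of the optimization effort in such routines goes.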

Generic repository design concepts and thermal analysis (FY11)

Hardin, Ernest H.

Reference concepts for geologic disposal of used nuclear fuel and high-level radioactive waste in the U.S. are developed, including geologic settings and engineered barriers. Repository thermal analysis is demonstrated for a range of waste types from projected future, advanced nuclear fuel cycles. The results show significant differences among the geologic media considered (clay/shale, crystalline rock, salt), and also that waste package size and waste loading must be limited to meet targeted maximum temperature values. In this study, the UFD R&D Campaign has developed a set of reference geologic disposal concepts for a range of waste types that could potentially be generated in advanced nuclear fuel cycles. A disposal concept consists of three components: waste inventory, geologic setting, and concept of operations. Mature repository concepts have been developed in other countries for disposal of spent LWR fuel and HLW from reprocessing used nuclear fuel (UNF), and these serve as starting points for developing this set. Additional design details and EBS concepts will be considered as the reference disposal concepts evolve. The waste inventory considered in this study includes: (1) direct disposal of SNF from the LWR fleet, including Gen III+ advanced LWRs being developed through the Nuclear Power 2010 Program, operating in a once-through cycle; (2) waste generated from reprocessing of LWR UOX UNF to recover U and Pu, and subsequent direct disposal of used Pu-MOX fuel (also used in LWRs) in a modified-open cycle; and (3) waste generated by continuous recycling of metal fuel from fast reactors operating in a TRU-burner configuration, with additional TRU material input supplied from reprocessing of LWR UOX fuel. The geologic setting provides the natural barriers and establishes the boundary conditions for performance of engineered barriers.
The composition and physical properties of the host medium dictate design and construction approaches, and determine hydrologic and thermal responses of the disposal system. Clay/shale, salt, and crystalline rock media are selected as the basis for reference mined geologic disposal concepts in this study, consistent with advanced international repository programs, and previous investigations in the U.S. The U.S. pursued deep geologic disposal programs in crystalline rock, shale, salt, and volcanic rock in the years leading up to the Nuclear Waste Policy Act, or NWPA (Rechard et al. 2011). The 1987 NWPA amendment act focused the U.S. program on unsaturated, volcanic rock at the Yucca Mountain site, culminating in the 2008 license application. Additional work on unsaturated, crystalline rock settings (e.g., volcanic tuff) is not required to support this generic study. Reference disposal concepts are selected for the media listed above and for deep borehole disposal, drawing from recent work in the U.S. and internationally. The main features of the repository concepts are discussed in Section 4.5 and summarized in Table ES-1. Temperature histories at the waste package surface and a specified distance into the host rock are calculated for combinations of waste types and reference disposal concepts, specifying waste package emplacement modes. Target maximum waste package surface temperatures are identified, enabling a sensitivity study to inform the tradeoff between the quantity of waste per disposal package, and decay storage duration, with respect to peak temperature at the waste package surface. For surface storage duration on the order of 100 years or less, waste package sizes for direct disposal of SNF are effectively limited to 4-PWR configurations (or equivalent size and output). 
Thermal results are summarized, along with recommendations for follow-on work including adding additional reference concepts, verification and uncertainty analysis for thermal calculations, developing descriptions of surface facilities and other system details, and cost estimation to support system-level evaluations.

Granite disposal of U.S. high-level radioactive waste

Mariner, Paul M.; Lee, Joon L.; Hardin, Ernest H.; Hansen, Francis D.; Freeze, Geoffrey A.; Lord, Anna S.; Goldstein, Barry G.

This report evaluates the feasibility of disposing of U.S. high-level radioactive waste in granite several hundred meters below the surface of the earth. The U.S. has many granite formations with positive attributes for permanent disposal. Similar crystalline formations have been extensively studied by international programs, two of which, in Sweden and Finland, are the host rocks of submitted or imminent repository license applications. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in granite media. In this report we develop scoping performance analyses, based on the applicable features, events, and processes (FEPs) identified by international investigators, to support generic conclusions regarding post-closure safety. Unlike the safety analyses for disposal in salt, shale/clay, or deep boreholes, the safety analysis for a mined granite repository depends largely on waste package preservation. In crystalline rock, waste packages are preserved by the high mechanical stability of the excavations, the diffusive barrier of the buffer, and favorable chemical conditions. The buffer is preserved by low groundwater fluxes, favorable chemical conditions, backfill, and the rigid confines of the host rock. An added advantage of a mined granite repository is that waste packages would be fairly easy to retrieve, should retrievability be an important objective. The results of the safety analyses performed in this study are consistent with the results of comprehensive safety assessments performed for sites in Sweden, Finland, and Canada. They indicate that a granite repository would satisfy established safety criteria and suggest that a small number of FEPs would largely control the release and transport of radionuclides. In the event the U.S. decides to pursue a potential repository in granite, a detailed evaluation of these FEPs would be needed to inform site selection and safety assessment.

Material synthesis and hydrogen storage of palladium-rhodium alloy

Yang, Nancy Y.

Pd and Pd alloys are candidate material systems for tritium and hydrogen storage. We have actively engaged in material synthesis and studied the materials science of hydrogen storage for Pd-Rh alloys. In collaboration with UC Davis, we successfully developed and optimized a supersonic gas atomization system, including its processing parameters, for Pd-Rh-based alloy powders. This optimized system and processing enable us to produce ≤50-µm powders with suitable metallurgical properties for H-storage R&D. In addition, we studied hydrogen absorption-desorption pressure-composition-temperature (PCT) behavior using these gas-atomized Pd-Rh alloy powders. The study shows that the PCT behavior of Pd-Rh alloys is strongly influenced by their metallurgy. The plateau pressure, slope, and H/metal capacity are highly dependent on alloy composition and its chemical distribution. For the gas-atomized Pd-10 wt% Rh, the absorption plateau pressure is relatively high and consistent. However, the absorption-desorption PCT exhibits a significant hysteresis loop that is not seen in the 30-nm nanopowders produced by chemical precipitation. In addition, we observed that the presence of hydrogen introduces strong lattice strain, plastic deformation, and dislocation networking that lead to material hardening, lattice distortions, and volume expansion. These observations suggest that the H-induced dislocation networking is responsible for the hysteresis loop seen in the current atomized Pd-10 wt% Rh powders. This conclusion is consistent with the hypothesis suggested by Flanagan and others (Ref. 1) that plastic deformation or dislocations control the hysteresis loop.

Packaging a liquid metal ESD with a micro-scale mercury droplet

Galambos, Paul

A liquid metal ESD is being developed to provide electrical switching at different acceleration levels. The metal acts as both proof mass and electrical contact, and mercury was chosen to meet the operating parameters. There are many challenges surrounding the deposition and containment of micro-scale mercury droplets. Novel methods of micro-liquid transfer are developed to deliver controllable amounts of mercury, in volumes under 1 µL, to the appropriate channels. Issues of hermetic sealing and avoidance of mercury contamination are also addressed.

Kauai Island Utility Co-op (KIUC) PV integration study

Ellis, Abraham E.

This report investigates the effects that increased distributed photovoltaic (PV) generation would have on the Kauai Island Utility Co-op (KIUC) system operating requirements. The study focused on determining the reserve requirements needed to mitigate the impact of PV variability on system frequency, and the impact on operating costs. Scenarios of 5 MW, 10 MW, and 15 MW of nameplate PV generation capacity distributed across Kauai were considered. The analysis required synthesis of PV solar resource data and modeling of the KIUC system inertia. The principal findings are that the selection of units identified as marginal resources used for load following will change; that PV penetration will displace energy generated by existing conventional units, reducing overall fuel consumption; that PV penetration at any deployment level is not likely to reduce system peak load; and that increasing PV penetration has little effect on load-following reserves. The study was performed by EnerNex under contract from Sandia National Laboratories with cooperation from KIUC.

Fluctuating wall pressures measured beneath a supersonic turbulent boundary layer

Physics of Fluids

Beresh, Steven J.; Henfling, John F.; Spillers, Russell W.; Pruett, Brian O.

Wind tunnel experiments up to Mach 3 have provided fluctuating wall-pressure spectra beneath a supersonic turbulent boundary layer to frequencies reaching 400 kHz by combining data from piezoresistive silicon pressure transducers effective at low- and mid-range frequencies and piezoelectric quartz sensors to detect high frequency events. Data were corrected for spatial attenuation at high frequencies and for wind-tunnel noise and vibration at low frequencies. The resulting power spectra revealed the ω⁻¹ dependence for fluctuations within the logarithmic region of the boundary layer but are essentially flat at low frequency and do not exhibit the theorized ω² dependence. When normalized by outer flow variables, a slight dependence upon the Reynolds number is detected, but Mach number is the dominant parameter. Normalization by inner flow variables is largely successful for the ω⁻¹ region but does not apply for lower frequencies. A comparison of the pressure fluctuation intensities with 50 years of historical data shows their reported magnitude chiefly is a function of the frequency response of the sensors. The present corrected data yield results in excess of the bulk of the historical data, but uncorrected data are consistent with lower magnitudes, suggesting that much of the historical compressible database may be biased low. © 2011 American Institute of Physics.
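The ω⁻¹ scaling in the logarithmic region can be checked numerically as the slope of the spectrum on log-log axes. A small sketch, assuming a simple least-squares fit over a chosen frequency band (an illustration, not the paper's analysis procedure):

```python
import math

def loglog_slope(freqs, psd):
    """Least-squares slope of log10(psd) versus log10(freq); a band that
    follows an omega^-1 law has a slope close to -1 on these axes."""
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in psd]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# a synthetic spectrum proportional to 1/f gives a slope of -1
slope = loglog_slope([10.0, 100.0, 1000.0], [0.1, 0.01, 0.001])
```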

Drift-insensitive dim-target detection using differential correlation

Proceedings of SPIE - The International Society for Optical Engineering

Hsu, Alan Y.

We investigate a dim-target-detection approach for pixelated focal-plane arrays based on differential correlation detection. The change in the temporal correlation between the output signals of an illuminated pixel and a dark reference pixel is measured in real time over some number of samples and may enable more sensitive detection of dim targets whose signal amplitudes are on the order of the noise levels of the sensor. If successful, target detection may be possible with target signal-to-noise ratios of less than 1 under practical conditions where dark drift may occur. © 2011 SPIE.
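One plausible form of the detection statistic is a windowed correlation coefficient between the two pixel streams: drift common to both pixels keeps the correlation high, while an uncorrelated target signal on the illuminated pixel pulls it down. A sketch under that assumption (not the paper's exact estimator):

```python
def windowed_correlation(signal_px, reference_px):
    """Pearson correlation between an illuminated pixel and a dark
    reference pixel over one sample window."""
    n = len(signal_px)
    mx = sum(signal_px) / n
    my = sum(reference_px) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(signal_px, reference_px))
    vx = sum((x - mx) ** 2 for x in signal_px)
    vy = sum((y - my) ** 2 for y in reference_px)
    return cov / (vx * vy) ** 0.5

# pure common-mode drift on both pixels: correlation stays at 1
drift = [0.0, 1.0, 2.0, 3.0]
# the same drift plus a target-like perturbation on the last sample
perturbed = [0.0, 1.0, 2.0, 10.0]
```

Because the drift term appears in both streams, it cancels out of the decision statistic, which is what makes the approach drift-insensitive.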

Z-Backlighter facility upgrades: A path to short/long pulse, multi-frame, multi-color x-ray backlighting at the Z-Accelerator

Proceedings of SPIE - The International Society for Optical Engineering

Schwarz, Jens S.; Rambo, Patrick K.; Geissel, Matthias G.; Kimmel, Mark W.; Schollmeier, Marius; Smith, Ian C.; Bellum, John; Kletecka, Damon; Sefkow, Adam; Smith, Douglas; Atherton, Briggs

We discuss upgrades and developments currently underway at the Z-Backlighter facility. Among them are a new optical parametric chirped pulse amplifier (OPCPA) front end, 94 cm × 42 cm multilayer dielectric (MLD) gratings, dichroic laser beam transport studies, 25 keV x-ray source development, and a major target area expansion. These upgrades will pave the way for short/long-pulse, multi-frame, multi-color x-ray backlighting at the Z-Accelerator. © 2011 SPIE.

Formulation of chlorine and decontamination booster station optimization problem

World Environmental and Water Resources Congress 2011: Bearing Knowledge for Sustainability - Proceedings of the 2011 World Environmental and Water Resources Congress

Haxton, T.; Murray, R.; Hart, W.; Klise, K.; Phillips, Cynthia A.

A commonly used indicator of water quality is the amount of residual chlorine in a water distribution system. Chlorine booster stations are often utilized to maintain acceptable levels of residual chlorine throughout the network. In addition, hyper-chlorination has been used to disinfect portions of the distribution system following a pipe break. Consequently, it is natural to use hyper-chlorination via multiple booster stations located throughout a network to mitigate consequences and decontaminate networks after a contamination event. Many researchers have explored different methodologies for optimally locating booster stations in the network for daily operations. In this research, the problem of optimally locating chlorine booster stations to decontaminate following a contamination incident will be described. © 2011 ASCE.

Minimize impact or maximize benefit: The role of objective function in approximately optimizing sensor placement for municipal water distribution networks

World Environmental and Water Resources Congress 2011: Bearing Knowledge for Sustainability - Proceedings of the 2011 World Environmental and Water Resources Congress

Hart, William E.; Murray, Regan; Phillips, Cynthia A.

We consider the design of a sensor network to serve as an early warning system against a potential suite of contamination incidents. Given any measure for evaluating the quality of a sensor placement, there are two ways to model the objective. One is to minimize the impact or damage to the network, the other is to maximize the reduction in impact compared to the network without sensors. These objectives are the same when the problem is solved optimally. But when given equally-good approximation algorithms for each of this pair of complementary objectives, either one might be a better choice. The choice generally depends upon the quality of the approximation algorithms, the impact when there are no sensors, and the number of sensors available. We examine when each objective is better than the other by examining multiple real world networks. When assuming perfect sensors, minimizing impact is frequently superior for virulent contaminants. But when there are long response delays, or it is very difficult to reduce impact, maximizing impact reduction may be better. © 2011 ASCE.
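The equivalence at optimality is easy to see on a toy instance: with a fixed no-sensor baseline B, minimizing impact I(p) over placements p and maximizing reduction B − I(p) select the same placement, since B is constant. A tiny sketch (placement names and numbers are made up):

```python
def best_placements(impact, baseline):
    """impact: dict mapping placement -> expected impact with sensors.
    baseline: expected impact with no sensors (a constant).
    Solved exactly, the two objectives pick the same placement."""
    min_impact = min(impact, key=impact.get)
    max_reduction = max(impact, key=lambda p: baseline - impact[p])
    return min_impact, max_reduction

# three candidate placements; 'B' both minimizes impact and
# maximizes reduction relative to the baseline of 20
choice = best_placements({"A": 10.0, "B": 4.0, "C": 7.0}, 20.0)
```

Approximation breaks this symmetry: a ρ-approximation for the first objective guarantees I ≤ ρ·I*, while for the second it guarantees B − I ≥ (B − I*)/ρ, and which bound is stronger depends on how large B is relative to I*, which is the tradeoff the abstract examines.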

A new wafer-level packaging technology for MEMS with hermetic micro-environment

Proceedings - Electronic Components and Technology Conference

Chanchani, Rajen C.; Nordquist, Christopher N.; Olsson, Roy H.; Peterson, T.C.; Shul, Randy J.; Ahlers, Catalina A.; Plut, Thomas A.; Patrizi, G.A.

We report a new wafer-level packaging technology for miniature MEMS in a hermetic micro-environment. The unique new feature of this technology is that it uses only low-cost wafer-level processes such as eutectic bonding, Bosch etching, and mechanical lapping and thinning, as opposed to the more expensive process steps required in alternative wafer-level technologies involving through-silicon vias or membrane lids. We have demonstrated this technology by packaging silicon-based AlN microsensors in packages 1.3 × 1.3 mm² in size and 200 µm thick. Our initial cost analysis has shown that when mass produced with high yields, this device will cost $0.10 to $0.90. The technology involves first preparing the lid and MEMS wafers separately, with a sealring metal stack of Ti/Pt/Au on the MEMS wafers and Ti/Pt/Au/Ge/Au on the lid wafers. On the MEMS wafers, the signal/power/ground interconnections to the wire-bond pads are isolated from the sealring metallization by an insulating AlN layer. Prior to bonding, the lid wafers were Bosch-etched by 120 µm in the wire-bond pad area and by 20 µm in the central hermetic device cavity area. The MEMS and lid wafers were then aligned and bonded in vacuum or in a nitrogen environment at or above the Au-Ge eutectic temperature, 363 °C. The bonded wafers were then thinned and polished, first on the MEMS side and then on the lid side. The MEMS side was thinned to 100 µm with a nearly scratch-free and crack-free surface. The lid side was similarly thinned to 100 µm, exposing the wire-bond pads. After thinning, a 100-µm-thick lid remained over the MEMS features, providing a 20-µm-high hermetic micro-environment. The thinned MEMS/lid wafer-level assemblies were then sawed into individual devices. These devices can be integrated into the next-level assembly either by wire bonding or by surface mounting.
The wafer-level packaging approach developed in this project demonstrated RF feedthroughs with 0.3 dB insertion loss and adequate RF performance through 2 GHz. Pressure-monitoring Pirani structures built inside the hermetic lids have demonstrated the ability to detect leaks in the package. In our preliminary development experiments, we have demonstrated 50% hermetic yields. © 2011 IEEE.

Thin gold to gold bonding for flip chip applications

Proceedings - Electronic Components and Technology Conference

Rohwer, Lauren E.; Chu, Dahwey C.

We have demonstrated a solderless flip chip bonding process that utilizes electroless nickel/palladium, immersion gold pad metallization. This mask-less process enables higher interconnect densities than can be achieved with standard solder bump reflow. The thin (100 nm) immersion gold surfaces were coated with dodecanethiol self-assembled monolayers (SAMs). Strong gold-to-gold bonds were formed at 185 °C with shear strengths that exceed MIL-STD-883 requirements. Gold stud bumps are also promising for flip chip applications and can be bonded at 150 °C when the gold surfaces are properly pre-treated; dilute piranha solution, argon plasma, and dodecanethiol SAM treatments work equally well. © 2011 IEEE.

An expansion tester for bounded degree graphs

SIAM Journal on Computing

Kale, Satyen; Seshadhri, C.

We consider the problem of testing graph expansion (either vertex or edge) in the bounded degree model [O. Goldreich and D. Ron, On Testing Expansion in Bounded-Degree Graphs, Technical Report TR00-020, ECCC, Potsdam, Germany, 2000]. We give a property tester that takes as input a graph with degree bound d, an expansion bound α, and a parameter ε > 0. The tester accepts the graph with high probability if its expansion is more than α, and rejects it with high probability if it is ε-far from any graph with expansion α′ with degree bound d, where α′ < α is a function of α. For edge expansion, we obtain α′ = Ω(α²/d), and for vertex expansion, we obtain α′ = Ω(α²/d²). In either case, the algorithm runs in time Õ(n^((1+μ)/2) d²/(εα²)) for any fixed μ > 0. © 2011 Society for Industrial and Applied Mathematics.
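Testers in this family typically run many short lazy random walks from a sampled vertex and count endpoint collisions: on a good expander the walk mixes quickly, the endpoints are near-uniform, and collisions are rare. A simplified sketch of that collision statistic (the parameters and structure are illustrative, not the paper's exact algorithm):

```python
import random
from itertools import combinations

def endpoint_collisions(adj, start, walk_len, num_walks, d, rng):
    """Count pairwise endpoint collisions among lazy random walks from
    `start` on a graph with degree bound d (adjacency-list dict).
    Few collisions suggest fast mixing, i.e. good expansion."""
    ends = []
    for _ in range(num_walks):
        v = start
        for _ in range(walk_len):
            nbrs = adj[v]
            # lazy step: move to a uniform neighbor w.p. deg(v)/(2d)
            if nbrs and rng.random() < len(nbrs) / (2 * d):
                v = rng.choice(nbrs)
        ends.append(v)
    return sum(1 for a, b in combinations(ends, 2) if a == b)
```

On an n-vertex expander the count concentrates near num_walks²/(2n); a graph far from every expander (e.g. one with a sparse cut trapping the walk) produces noticeably more collisions, and the tester thresholds on that gap.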

Communications-based automated assessment of team cognitive performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Lakkaraju, Kiran; Adams, Susan S.; Abbott, Robert G.; Forsythe, James C.

In this paper we analyze speech communications to determine whether expert and novice teams can be differentiated based on communication patterns. Two pairs of experts and novices performed numerous test sessions on the E-2 Enhanced Deployable Readiness Trainer (EDRT), a medium-fidelity simulator of the Naval Flight Officer (NFO) stations positioned at the back end of the E-2 Hawkeye. Results indicate that experts and novices can be differentiated based on communication patterns. First, experts and novices differ significantly with regard to the frequency of utterances, with both expert teams making many fewer radio calls than both novice teams. Next, the semantic content of utterances was considered. Using both manual and automated speech-to-text conversion, the resulting text documents were compared. For 7 of 8 subjects, the two most similar subjects (using cosine similarity of term vectors) were in the same category of expertise (novice/expert): the semantic content of utterances by experts was more similar to that of other experts than to that of novices, and vice versa. Finally, using machine learning techniques we constructed a classifier that, given the text of a subject's speech as input, could identify whether the individual was an expert or a novice with a very low error rate. By examining the parameters of the machine learning algorithm we were also able to identify terms strongly associated with novices and with experts. © 2011 Springer-Verlag.
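The transcript comparison above rests on cosine similarity of term vectors. A minimal sketch of that measure (whitespace tokenization is an assumption; the paper's preprocessing is not specified here):

```python
from collections import Counter
import math

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between the term-frequency vectors of two
    transcripts: 1.0 for identical term distributions, 0.0 for disjoint
    vocabularies."""
    ta = Counter(doc_a.lower().split())
    tb = Counter(doc_b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta)
    na = math.sqrt(sum(v * v for v in ta.values()))
    nb = math.sqrt(sum(v * v for v in tb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Labeling each transcript by the expertise class of its nearest neighbor under this measure is the comparison the abstract reports for 7 of 8 subjects.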

Using computational modeling to assess use of cognitive strategies

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Haass, Michael J.; Matzen, Laura E.

Although there are many strategies and techniques that can improve memory, cognitive biases generally lead people to choose suboptimal memory strategies. In this study, participants were asked to memorize words while their brain activity was recorded using electroencephalography (EEG). The participants' memory performance and EEG data revealed that a self-testing (retrieval practice) strategy could improve memory. The majority of the participants did not use self-testing, but computational modeling revealed that a subset of the participants had brain activity that was consistent with this optimal strategy. We developed a model that characterized the brain activity associated with passive study and with explicit memory testing. We used that model to predict which participants adopted a self-testing strategy, and then evaluated the behavioral performance of those participants. This analysis revealed that, as predicted, the participants whose brain activity was consistent with a self-testing strategy had better memory performance at test. © 2011 Springer-Verlag.

Cultural neuroscience and individual differences: Implications for augmented cognition

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Matzen, Laura E.

Technologies that augment human cognition have the potential to enhance human performance in a wide variety of domains. However, there are a number of individual differences in brain activity that must be taken into account during the development, validation, and application of augmented cognition tools. A growing body of research in cultural neuroscience has shown that there are substantial differences in how people from different cultural backgrounds approach various cognitive tasks. In addition, there are many other types of individual differences and even changes in a single individual over time that have implications for augmented cognition research and development. The aim of this session is to highlight a few of those differences and to discuss how they might impact augmented cognition technologies. © 2011 Springer-Verlag.

Communications-based automated assessment of team cognitive performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Lakkaraju, Kiran L.; Adams, Susan S.; Abbott, Robert G.; Forsythe, James C.

In this paper, we analyze speech communications to determine whether expert and novice teams can be differentiated based on their communication patterns. Two pairs of experts and novices performed numerous test sessions on the E-2 Enhanced Deployable Readiness Trainer (EDRT), a medium-fidelity simulator of the Naval Flight Officer (NFO) stations positioned at the back end of the E-2 Hawkeye. Results indicate that experts and novices can be differentiated based on communication patterns. First, experts and novices differ significantly with regard to the frequency of utterances, with both expert teams making many fewer radio calls than both novice teams. Next, the semantic content of utterances was considered. Using both manual and automated speech-to-text conversion, the resulting text documents were compared. For 7 of 8 subjects, the two most similar subjects (using cosine similarity of term vectors) were in the same category of expertise (novice/expert). This means that the semantic content of utterances by experts was more similar to that of other experts than to that of novices, and vice versa. Finally, using machine learning techniques we constructed a classifier that, given the text of a subject's speech as input, could identify whether the individual was an expert or a novice with a very low error rate. By examining the parameters of the machine learning algorithm we were also able to identify terms that are strongly associated with novices and experts. © 2011 Springer-Verlag.
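The term-vector comparison described in the abstract can be sketched as follows. This is a minimal illustration using whitespace tokenization and nearest-neighbor labeling; the function names and toy transcripts are assumptions, not the authors' actual pipeline:

```python
from collections import Counter
from math import sqrt

def term_vector(text):
    """Term-frequency vector for a transcript (simple whitespace tokenization)."""
    return Counter(text.lower().split())

def cosine_similarity(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(count * v[term] for term, count in u.items() if term in v)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def nearest_neighbor_label(transcript, labeled_transcripts):
    """Label a transcript with the expertise class of its most similar neighbor."""
    qv = term_vector(transcript)
    text, label = max(labeled_transcripts,
                      key=lambda pair: cosine_similarity(qv, term_vector(pair[0])))
    return label
```

Finding that a subject's nearest neighbor shares their expertise class, as reported for 7 of 8 subjects, is exactly this kind of lookup applied pairwise across subjects.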

More Details

Orienting lipid domains in giant vesicles using an electric field

Chemical Communications

Zendejas, Frank Z.; Meagher, Robert M.; Stachowiak, Jeanne C.; Hayden, Carl C.; Sasaki, Darryl Y.

Directing the orientation of molecular assemblies is a key step toward creating complex hierarchical structures that yield higher order functional materials. Here, we demonstrate the directed orientation of functionalized lipid domains and protein-membrane assemblies, using an electric field. © 2011 The Royal Society of Chemistry.

More Details

A game theoretic bidding agent for the ad auction game

ICAART 2011 - Proceedings of the 3rd International Conference on Agents and Artificial Intelligence

Vorobeychik, Yevgeniy V.

TAC/AA (the ad auction game) provides a forum for researchers studying strategic bidding in keyword auctions to try out their ideas in an independently simulated setting. We describe an agent that successfully competed in the TAC/AA game, showing in the process how to operationalize game-theoretic analysis to develop a very simple, yet highly competent agent. Specifically, we use simulation-based game theory to approximate equilibria in a restricted bidding strategy space, assess their robustness in a normative sense, and argue for the relative plausibility of equilibria based on an analogy to a common agent design methodology. Finally, we offer some evidence for the efficacy of equilibrium predictions based on TAC/AA tournament data.

More Details

Clutter locus equation for more general linear array orientation

Proceedings of SPIE - The International Society for Optical Engineering

Bickel, Douglas L.

The clutter locus is an important concept in space-time adaptive processing (STAP) for ground moving target indicator (GMTI) radar systems. The clutter locus defines the expected ground clutter location in the angle-Doppler domain. Typically in literature, the clutter locus is presented as a line, or even a set of ellipsoids, under certain assumptions about the geometry of the array. Most often, the array is assumed to be in the horizontal plane containing the velocity vector. This paper will give a more general 3-dimensional interpretation of the clutter locus for a general linear array orientation. © 2010 SPIE.

More Details

Radar cross section statistics of dismounts at Ku-band

Proceedings of SPIE - The International Society for Optical Engineering

Raynal, Ann M.; Burns, Bryan L.; Verge, Tobias J.; Bickel, Douglas L.; Dunkel, Ralf; Doerry, Armin

Knowing the statistical characteristics of a target's radar cross-section (RCS) is crucial to the success of radar target detection algorithms. A wide range of applications currently exist for dismount (i.e. human body) detection and monitoring using ground-moving target indication (GMTI) radar systems. Dismounts are particularly challenging to detect. Their RCS is orders of magnitude lower than traditional GMTI targets, such as vehicles. Their velocity of about 0 to 1.5 m/s is also much slower than vehicular targets. Studies regarding the statistical nature of the RCS of dismounts focus primarily on simulations or very limited empirical data at specific frequencies. This paper seeks to enhance the existing body of work on dismount RCS statistics at Ku-band, which is currently lacking, and has become an important band for such remote sensing applications. We examine the RCS probability distributions of different sized humans in various stances, across aspect and elevation angle, for horizontal (HH) and vertical (VV) transmit/receive polarizations, and at diverse resolutions, using experimental data collected at Ku-band. We further fit Swerling target models to the RCS distributions and suggest appropriate detection thresholds for dismounts in this band. © 2010 SPIE.
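A Swerling 1 model treats the RCS as exponentially distributed, so fitting the model and deriving a detection threshold reduce to simple closed forms. The sketch below is a generic illustration of that model, not the paper's fitting procedure; the function names are assumptions:

```python
import math

def fit_swerling1_mean(rcs_samples):
    """Maximum-likelihood estimate of the mean RCS under a Swerling 1
    (exponential) fluctuation model: just the sample mean."""
    return sum(rcs_samples) / len(rcs_samples)

def exceedance_probability(threshold, mean_rcs):
    """P(RCS > threshold) for an exponentially distributed RCS."""
    return math.exp(-threshold / mean_rcs)

def threshold_for_exceedance(p, mean_rcs):
    """Threshold exceeded with probability p (inverse of the above)."""
    return -mean_rcs * math.log(p)
```

With a fitted mean RCS in hand, a detector can pick the threshold that yields a desired single-look exceedance probability for a fluctuating target.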

More Details

VM-based slack emulation of large-scale systems

Proceedings of the 1st International Workshop on Runtime and Operating Systems for Supercomputers, ROSS 2011

Bridges, Patrick G.; Arnold, Dorian; Pedretti, Kevin P.

This paper describes the design of a system to enable large-scale testing of new software stacks and prospective high-end computing architectures. The proposed architecture combines system virtualization, time dilation, architectural simulation, and slack simulation to provide scalable emulation of hypothetical systems. We also describe virtualization-based full-system measurement and monitoring tools to aid in using the proposed system for co-design of high-performance computing system software and architectural features for future systems. Finally, we provide a description of the implementation strategy and status of the proposed system. © 2011 ACM.

More Details

Advanced core/multishell germanium/silicon nanowire heterostructures: The Au-diffusion bottleneck

Applied Physics Letters

Dayeh, Shadi A.; MacK, Nathan H.; Huang, Jian Y.; Picraux, S.T.

Synthesis of germanium/silicon (Ge/Si) core/shell nanowire heterostructures is typically accompanied by unwanted gold (Au) diffusion on the Ge nanowire sidewalls, resulting in rough surface morphology, undesired whisker growth, and detrimental performance of electronic devices. Here, we advance understanding of this Au diffusion on nanowires, its diameter dependence, and its kinetic origin. We devise a growth procedure to form a blocking layer between the Au seed and the Ge nanowire sidewalls, eliminating Au diffusion and enabling in situ synthesis of high-quality Ge/Si core/shell heterostructures. © 2011 American Institute of Physics.

More Details

Differential imaging microscopy of physically complex surfaces undergoing atmospheric corrosion

NACE - International Corrosion Conference Series

Enos, David E.; Girard, Gerald R.

Frequently, optical observation of component materials is the only viable technique to evaluate degradation processes in-situ. Unfortunately, due to the visually complex nature of many surfaces (e.g., scratches, occlusions, etc.), the degradation process, particularly at early stages, is difficult or impossible to resolve. As a result, studies are limited to evaluating degradation well after initiation has taken place. Thus, there is a need for a technique that could be implemented utilizing image processing that allows the de-convolution of changes due to the degradation process of interest from the background "noise". An automated differential imaging system was constructed for in-situ studies of both aqueous and atmospheric environments to fulfill this need. The basic functionality of the Differential imaging system was demonstrated on gold plated copper and gold /nickel plated copper coupons exposed to a sulfide containing atmosphere. © 2011 by NACE International.

More Details

Controlling thermal conductance through quantum dot roughening at interfaces

Physical Review B - Condensed Matter and Materials Physics

Hopkins, Patrick E.; Duda, John C.; Petz, Christopher W.; Floro, Jerrold A.

We examine the fundamental phonon mechanisms affecting the interfacial thermal conductance across a single layer of quantum dots (QDs) on a planar substrate. We synthesize a series of GexSi1-x QDs by heteroepitaxial self-assembly on Si surfaces and modify the growth conditions to provide QD layers with different root-mean-square (rms) roughness levels in order to quantify the effects of roughness on thermal transport. We measure the thermal boundary conductance (hK) with time-domain thermoreflectance. The trends in thermal boundary conductance show that the effect of the QDs on hK is more apparent at elevated temperatures, while at low temperatures the QD patterning does not drastically affect hK. The functional dependence of hK on rms surface roughness reveals a trend suggesting that both vibrational mismatch and changes in the localized phonon transport near the interface contribute to the reduction in hK. We find that QD structures with rms roughnesses greater than 4 nm decrease hK at Si interfaces by a factor of 1.6. We develop an analytical model for phonon transport at rough interfaces based on a diffusive scattering assumption and phonon attenuation that describes the measured trends in hK. This indicates that the observed reduction in thermal conductivity in SiGe quantum dot superlattices is primarily due to the increased physical roughness at the interfaces, which creates additional phonon resistive processes beyond the interfacial vibrational mismatch. © 2011 American Physical Society.

More Details

Fabrication of a nanostructure thermal property measurement platform

Nanotechnology

Harris, C.T.; Martinez, Julio M.; Shaner, Eric A.; Huang, Jian Y.; Swartzentruber, Brian S.; Sullivan, J.P.; Chen, G.

Measurements of the electrical and thermal transport properties of one-dimensional nanostructures (e.g., nanotubes and nanowires) are typically obtained without detailed knowledge of the specimen's atomic-scale structure or defects. To address this deficiency, we have developed a microfabricated, chip-based characterization platform that enables both transmission electron microscopy (TEM) of the atomic structure and defects as well as measurement of the thermal transport properties of individual nanostructures. The platform features a suspended heater line that physically contacts the center of a suspended nanostructure/nanowire placed using in situ scanning electron microscope nanomanipulators. Suspension of the nanostructure across a through-hole enables TEM characterization of the atomic and defect structure (dislocations, stacking faults, etc.) of the test sample. This paper explains, in detail, the processing steps involved in creating this thermal property measurement platform. As a model study, we report the use of this platform to measure the thermal conductivity and defect structure of a GaN nanowire. © 2011 IOP Publishing Ltd.

More Details

Influence of solvent size on the mechanical properties and rheology of polydimethylsiloxane-based polymeric gels

Polymer

Mrozek, Randy A.; Cole, Phillip J.; Otim, Kathryn J.; Shull, Kenneth R.; Lenhart, Joseph L.

Soft polymeric gels have utility in a broad range of medical, industrial, and military applications, which has led to an extensive research investment over the past several decades. While most gel research exploits a cross-linked polymer network swollen with small-molecule solvents, this article systematically investigates the impact of the solvent molecular weight on the resulting gel mechanical properties. The model polymer gel was composed of a chemically cross-linked polydimethylsiloxane (PDMS) network loaded with a non-reactive PDMS solvent. In addition to investigating the impact of solvent loading, the solvent molecular weight was varied from 423,000 g/mol to 1250 g/mol, broadly spanning the entanglement molecular weight of PDMS (MW_ENT ∼ 29,000 g/mol). The gels exhibited a strongly frequency-dependent mechanical response when the solvent molecular weight exceeded MW_ENT. In addition, the scaling exponent of shear storage modulus versus solvent loading decreased with increasing measurement frequency and solvent molecular weight, falling below 2.3, the theoretical value for networks formed in a theta solvent. The frequency-dependent shear storage modulus could be shifted by the ratio of solvent molecular weights to the 3.4 power to form a master curve at a given solvent loading, indicating that the mobility of entangled solvent plays a critical role in the mechanical response. In addition, the incorporation of entangled solvent can increase the toughness of the PDMS gels. © 2011 Elsevier Ltd. All rights reserved.

More Details

Improving CSE software through reproducibility requirements

Proceedings - International Conference on Software Engineering

Heroux, Michael A.

It is often observed that software engineering (SE) processes and practices for computational science and engineering (CSE) lag behind other SE areas [7]. This issue has been a concern for funding agencies, since new research increasingly relies upon and produces computational tools. At the same time, CSE research organizations find it difficult to prescribe formal SE practices for funded projects. Theoretical and experimental science rely heavily on independent verification of results as part of the scientific process. Computational science should have the same regard for independent verification but it does not. In this paper, we present an argument for using reproducibility and independent verification requirements as a driver to improve SE processes and practices. We describe existing efforts that support our argument, how these requirements can impact SE, challenges we face, and new opportunities for using reproducibility requirements as a driver for higher quality CSE software. Copyright 2011 ACM.

More Details

Laser ignition of multi-injection gasoline sprays

SAE 2011 World Congress and Exhibition

Genzale, Caroline L.; Pickett, Lyle M.; Hoops, Alexandra A.; Headrick, Jeffrey M.

Laser plasma ignition has been pursued by engine researchers as an alternative to electric spark-ignition systems, potentially offering benefits by avoiding quenching surfaces and extending breakdown limits at higher boost pressure and lower equivalence ratio. For this study, we demonstrate another potential benefit: the ability to control the timing of ignition with short, nanosecond pulses, thereby optimizing the type of mixture that burns in rapidly changing, stratified fuel-air mixtures. We study laser ignition at various timings during single and double injections at simulated gasoline engine conditions within a controlled, high-temperature, high-pressure vessel. Laser ignition is accomplished with a single low-energy (10 mJ), short duration (8 ns) Nd:YAG laser beam that is tightly focused (0.015 mm average measured 1/e² diameter) at a typical GDI spark plug location. Ignition timing is varied during, after, and between injections of a rapid, 0.4-ms/0.35-ms dwell/0.4-ms injection schedule. Results show success in igniting a single injection after the end of injection, but with poor combustion efficiency because the flame does not move downstream to earlier-injected charge. Findings are similar when igniting after the end of a double injection. Best results are observed when igniting between injections. The tail of the first injection ignites, and the second injection acts to pull the flame downstream to the first-injection charge, causing high combustion efficiency. However, the timing of ignition between pulses is critical. If ignited too soon after the end of the first injection, ignition may fail or, if ignition succeeds, the flame grows such that it immediately ignites the second injection, forming fuel-rich combustion and significant soot generation. The optimal timing produces no soot formation, but still maintains high combustion efficiency.
However, accomplishment of this timing requires ignition timing control on the order of 0.1 ms, which is much shorter than current electric spark ignition systems that have spark durations on the order of 1.0 ms. Therefore, the benefits of this double-injection ignition strategy are only realized with the use of a short-pulse laser ignition system. © 2011 SAE International.

More Details

Interaction of intake-induced flow and injection jet in a direct-injection hydrogen-fueled engine measured by PIV

SAE 2011 World Congress and Exhibition

Salazar, Victor; Kaiser, Sebastian

The in-cylinder charge motion during the compression stroke of an optically accessible engine equipped with direct injection of hydrogen fuel is measured via particle image velocimetry (PIV). The evolution of the mean flow field and the tumble ratio are examined with and without injection, each with the unmodified 4-valve pent-roof engine head and with the intake ports modified to yield higher tumble. The measurements in the vertical symmetry plane of the cylinder show that intake modification produces the desired drastic increase in tumble flow, changing the tumble ratio at BDC from 0.22 to 0.70. Both intake-induced flows are completely disrupted by the high-pressure hydrogen injection from an angled, centrally located single-hole nozzle. The injection event leads to a sudden reversal of the tumble; hence the tumble ratio is negative after injection. However, the two intake configurations still differ in tumble ratio by about the same magnitude as before injection. Cyclic variability of the tumble ratio is similar for the high- and low-tumble cases, and in each case injection increases variability from about 0.4 units to 1 unit, remaining roughly constant throughout the compression stroke. Through its counter-flowing action, high pre-injection tumble modifies the spatial structure of the post-injection flow and reduces the peak mean velocities near the end of the compression stroke. Without injection, the magnitude of the velocity field's ensemble root-mean-square (RMS) is generally greater for high tumble, while with injection the low-tumble case exhibits higher RMS towards the end of the compression stroke. © 2011 SAE International.

More Details

Regional Economic Accounting (REAcct). A software tool for rapidly approximating economic impacts

Ehlen, Mark E.; Starks, Shirley J.

This paper describes the Regional Economic Accounting (REAcct) analysis tool that has been in use for the last 5 years to rapidly estimate approximate economic impacts of disruptions due to natural or manmade events. It is based on and derived from the well-known and extensively documented input-output modeling technique initially presented by Leontief and more recently further developed by numerous contributors. REAcct provides county-level economic impact estimates in terms of gross domestic product (GDP) and employment for any area in the United States. The process for using REAcct incorporates geospatial computational tools and site-specific economic data, permitting the identification of geographic impact zones that allow differential magnitude and duration estimates to be specified for regions affected by a simulated or actual event. Using these data as input to REAcct, the number of employees in 39 directly affected economic sectors (37 industry production sectors and 2 government sectors) is calculated and aggregated to provide direct impact estimates. Indirect estimates are then calculated using Regional Input-Output Modeling System (RIMS II) multipliers. The interdependent relationships between critical infrastructures, industries, and markets are captured by the relationships embedded in the input-output modeling structure.
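The direct-plus-indirect accounting REAcct performs can be illustrated with a toy calculation; the sector name, productivity figure, and multiplier below are made-up inputs for illustration, not RIMS II data:

```python
def direct_gdp_impact(employees_affected, gdp_per_employee, outage_days):
    """Direct GDP loss: affected employment x annual productivity x outage
    duration, expressed as a fraction of a year."""
    return employees_affected * gdp_per_employee * outage_days / 365.0

def total_impacts(direct_impacts, multipliers):
    """Scale each sector's direct impact by its input-output multiplier to
    include indirect (supply-chain) effects."""
    return {sector: impact * multipliers[sector]
            for sector, impact in direct_impacts.items()}

# Hypothetical example: 1000 workers idled for 30 days in one sector.
direct = {"manufacturing": direct_gdp_impact(1000, 120_000.0, 30)}
totals = total_impacts(direct, {"manufacturing": 1.8})
```

Summing the per-sector totals over every county in the impact zone gives the kind of regional GDP estimate the tool reports.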

More Details

Nuclear containment steel liner corrosion workshop : final summary and recommendation report

Erler, Bryan A.; Weyers, Richard E.; Sagues, Alberto; Petti, Jason P.; Berke, Neal S.; Naus, Dan J.

This report documents the proceedings of an expert panel workshop conducted to evaluate the mechanisms of corrosion for the steel liner in nuclear containment buildings. The U.S. Nuclear Regulatory Commission (NRC) sponsored this work which was conducted by Sandia National Laboratories. A workshop was conducted at the NRC Headquarters in Rockville, Maryland on September 2 and 3, 2010. Due to the safety function performed by the liner, the expert panel was assembled in order to address the full range of issues that may contribute to liner corrosion. This report is focused on corrosion that initiates from the outer surface of the liner, the surface that is in contact with the concrete containment building wall. Liner corrosion initiating on the outer diameter (OD) surface has been identified at several nuclear power plants, always associated with foreign material left embedded in the concrete. The potential contributing factors to liner corrosion were broken into five areas for discussion during the workshop. Those include nuclear power plant design and operation, corrosion of steel in contact with concrete, concrete aging and degradation, concrete/steel non-destructive examination (NDE), and concrete repair and corrosion mitigation. This report also includes the expert panel member's recommendations for future research.

More Details

Minimal-overhead virtualization of a large scale supercomputer

ACM SIGPLAN Notices

Lange, John R.; Pedretti, Kevin P.; Dinda, Peter; Bae, Chang; Bridges, Patrick G.; Soltero, Philip; Merritt, Alexander

Virtualization has the potential to dramatically increase the usability and reliability of high performance computing (HPC) systems. However, this potential will remain unrealized unless overheads can be minimized. This is particularly challenging on large scale machines that run carefully crafted HPC OSes supporting tightly coupled, parallel applications. In this paper, we show how careful use of hardware and VMM features enables the virtualization of a large-scale HPC system, specifically a Cray XT4 machine, with ≤5% overhead on key HPC applications, microbenchmarks, and guests at scales of up to 4096 nodes. We describe three techniques essential for achieving such low overhead: passthrough I/O, workload-sensitive selection of paging mechanisms, and carefully controlled preemption. These techniques are forms of symbiotic virtualization, an approach on which we elaborate. Copyright © 2011 ACM.

More Details

Tomographic spectral imaging: Data acquisition and analysis via multivariate statistical analysis

JOM

Kotula, Paul G.; Sorensen, Neil R.

Tomographic spectral imaging is a powerful technique for the three-dimensional (3-D) analysis of materials. Using a focused ion-beam/scanning electron microscope equipped with an x-ray spectrometer, 3-D microanalysis can be performed on individual regions of a sample, such as defects, with a microanalytical spatial resolution typically better than 300 nm. The focused ion-beam can serially section at comparable thicknesses to sequentially reveal new analytical surfaces within the specimen. After each slice, a full two-spatial-dimension spectral image, consisting of a complete spectrum at each point in the 2-D array, is acquired with the scanning electron microscope/energy-dispersive x-ray spectrometer on the same platform. The process is repeated multiple times to result in a 3-D or tomographic spectral image. The challenge is to effectively and efficiently analyze the tomographic spectral image to extract chemical phase distributions. Therefore, automated multivariate statistical analysis methods were developed and applied to these images. Sandia's Automated eXpert Spectral Image Analysis multivariate statistical analysis software requires no a priori information to find even very weak signals hidden in the data sets. The result of the analysis is a small number of chemical components which describe the 3-D phase distribution in the volume of material sampled. These 3-D phases can then be effectively visualized with off-the-shelf 3-D rendering software. © 2011 TMS.
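As a rough stand-in for the multivariate statistical analysis the abstract describes (the AXSIA software uses more specialized factorizations), a principal-component decomposition of a tomographic spectral image can be sketched with an SVD; the function name and array layout are assumptions:

```python
import numpy as np

def spectral_image_components(spectral_image, n_components):
    """Unfold a (slices, rows, cols, channels) tomographic spectral image into
    a (voxels, channels) matrix and extract its leading principal components.

    Returns per-voxel scores (abundance-like maps) and component spectra."""
    z, y, x, ch = spectral_image.shape
    data = spectral_image.reshape(z * y * x, ch).astype(float)
    data = data - data.mean(axis=0)                 # center each channel
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components] # per-voxel weights
    loadings = vt[:n_components]                    # component "spectra"
    return scores.reshape(z, y, x, n_components), loadings
```

Each loading plays the role of a component spectrum, and its score volume shows where that component is concentrated in the sampled material.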

More Details

Online geometric reconstruction

Journal of the ACM

Chazelle, Bernard; Seshadhri, C.

We investigate a new class of geometric problems based on the idea of online error correction. Suppose one is given access to a large geometric dataset through a query mechanism; for example, the dataset could be a terrain and a query might ask for the coordinates of a particular vertex or for the edges incident to it. Suppose, in addition, that the dataset satisfies some known structural property P (for example, monotonicity or convexity) but that, because of errors and noise, the queries occasionally provide answers that violate P. Can one design a filter that modifies the query's answers so that (i) the output satisfies P; (ii) the amount of data modification is minimized? We provide upper and lower bounds on the complexity of online reconstruction for convexity in 2D and 3D. © 2011 ACM.
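As a toy illustration of the property P such a filter must enforce, the following checks a 2D point chain for convexity violations via cross products. It only flags candidate erroneous answers; it does not implement the paper's minimal-modification reconstruction filter:

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive for a left turn at o."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def violates_convexity(points):
    """Return indices of interior vertices where a 2D chain fails to turn in a
    consistent direction, i.e., answers a reconstruction filter would repair."""
    turns = [cross(points[i - 1], points[i], points[i + 1])
             for i in range(1, len(points) - 1)]
    # Reference turn direction: sign of the first non-degenerate turn.
    sign = next((1 if t > 0 else -1 for t in turns if t != 0), 1)
    return [i + 1 for i, t in enumerate(turns) if t * sign < 0]
```

An online filter in the paper's sense must go further: answer each query immediately, consistently with some convex dataset close to the noisy one.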

More Details

Effect of soluble polymer binder on particle distribution in a drying particulate coating

Journal of Colloid and Interface Science

Buss, Felix; Roberts, Christine C.; Crawford, Kathleen S.; Peters, Katharina; Francis, Lorraine F.

Soluble polymer is frequently added to inorganic particle suspensions to provide mechanical strength and adhesiveness to particulate coatings. To engineer coating microstructure, it is essential to understand how drying conditions and dispersion composition influence particle and polymer distribution in a drying coating. Here, a 1D model revealing the transient concentration profiles of particles and soluble polymer in a drying suspension is proposed. Sedimentation, evaporation and diffusion govern particle movement with the presence of soluble polymer influencing the evaporation rate and solution viscosity. Results are summarized in drying regime maps that predict particle accumulation at the free surface or near the substrate as conditions vary. Calculations and experiments based on a model system of poly(vinyl alcohol) (PVA), silica particles and water reveal that the addition of PVA slows the sedimentation and diffusion of the particles during drying such that accumulation of particles at the free surface is more likely. © 2011 Elsevier Inc.
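The competition between evaporation, sedimentation, and diffusion that the drying regime maps summarize can be captured by two dimensionless groups. The sketch below is a crude illustration with assumed order-of-magnitude thresholds, not the paper's 1D transport model:

```python
def peclet_number(evaporation_rate, film_thickness, diffusivity):
    """Pe = E*H/D: evaporation-driven accumulation at the free surface
    versus diffusive re-mixing."""
    return evaporation_rate * film_thickness / diffusivity

def stokes_settling_velocity(radius, density_difference, viscosity, g=9.81):
    """Stokes settling velocity of a sphere in a viscous fluid (SI units)."""
    return 2.0 * density_difference * g * radius ** 2 / (9.0 * viscosity)

def drying_regime(evaporation_rate, settling_velocity, film_thickness, diffusivity):
    """Crude regime map: where do particles accumulate during drying?"""
    pe = peclet_number(evaporation_rate, film_thickness, diffusivity)
    ns = settling_velocity * film_thickness / diffusivity  # sedimentation number
    if pe < 1.0 and ns < 1.0:
        return "uniform (diffusion dominates)"
    return "free surface" if pe > ns else "substrate"
```

Added polymer raises the solution viscosity, which lowers both the settling velocity and the particle diffusivity; in this picture that shifts a drying film toward the evaporation-dominated, free-surface-accumulation corner of the map, consistent with the trend the abstract reports.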

More Details

Design of model-friendly turbulent non-premixed jet burners for C 2+ hydrocarbon fuels

Review of Scientific Instruments

Zhang, Jiayao; Shaddix, Christopher R.; Schefer, Robert W.

Experimental measurements in laboratory-scale turbulent burners with well-controlled boundary and flow configurations can provide valuable data for validating models of turbulence-chemistry interactions applicable to the design and analysis of practical combustors. This paper reports on the design of two canonical nonpremixed turbulent jet burners for use with undiluted gaseous and liquid hydrocarbon fuels, respectively. Previous burners of this type have only been developed for fuels composed of H2, CO, and/or methane, often with substantial dilution. While both new burners are composed of concentric tubes with annular pilot flames, the liquid-fuel burner has an additional fuel vaporization step and an electrically heated fuel vapor delivery system. The performance of these burners is demonstrated by interrogating four ethylene flames and one flame fueled by a simple JP-8 surrogate. Through visual observation, it is found that the visible flame lengths show good agreement with standard empirical correlations. Rayleigh line imaging demonstrates that the pilot flame provides a spatially homogeneous flow of hot products along the edge of the fuel jet. Planar imaging of OH laser-induced fluorescence reveals a lack of local flame extinction in the high-strain near-burner region for fuel jet Reynolds numbers (Re) less than 20,000, and increasingly common extinction events for higher jet velocities. Planar imaging of soot laser-induced incandescence shows that the soot layers in these flames are relatively thin and are entrained into vortical flow structures in fuel-rich regions inside of the flame sheet. © 2011 American Institute of Physics.

More Details

High-power all-fiber passively Q-switched laser using a doped fiber as a saturable absorber: Numerical simulations

Optics Letters

Soh, Daniel B.; Bisson, Scott E.; Patterson, Brian D.; Moore, Sean M.

We report a design for a power-scalable all-fiber passively Q-switched laser that uses a large-mode-area Yb-doped fiber as a gain medium, adiabatically tapered to an unpumped single-mode Yb-doped fiber that serves as a saturable absorber. Through the use of a comprehensive numerical simulator, we demonstrate a passively Q-switched 1030 nm pulsed laser with a 14 ns pulse duration and 0.5 mJ pulse energy operating at a 200 kHz repetition rate. The proposed configuration has the potential for orders-of-magnitude improvement in both pulse energy and duration compared to the previously reported result. The key mechanism for this improvement relates to the ratio of the core areas between the pumped, inverted, large-mode-area gain fiber and the unpumped doped single-mode fiber. © 2011 Optical Society of America.

More Details