A Brain-Emulating Cognition and Control Architecture (BECCA) is presented. It is consistent with the hypothesized functions of pervasive intra-cortical and cortico-subcortical neural circuits, and it reproduces many salient aspects of human voluntary movement and motor learning. It also provides plausible mechanisms for many phenomena described in cognitive psychology, including perception and mental modeling. Both "inputs" (afferent channels) and "outputs" (efferent channels) are treated as neural signals; all are binary (on or off), and no meaning, information, or tag is associated with any of them. Although BECCA initially has no internal models, it learns complex interrelations between outputs and inputs, through which it bootstraps a model of the system it is controlling and of the outside world. BECCA accomplishes this with two key algorithms: S-Learning and Context-Based Similarity (CBS).
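As a minimal illustration only (hypothetical code, not the published S-Learning or CBS algorithms), the untagged binary channel interface described above might be sketched as an agent that emits binary efferent vectors and tallies output-input co-occurrences to bootstrap a crude model; all class and method names here are invented for illustration:

```python
import random

class BinaryChannelAgent:
    """Sketch: binary afferent/efferent channels with no built-in meaning,
    plus a co-occurrence table the agent uses to bootstrap a crude model
    of output -> input relations. Illustrative only; not BECCA itself."""

    def __init__(self, n_inputs, n_outputs, seed=0):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.rng = random.Random(seed)
        # counts[o][i]: how often input i was on when output o was on
        self.counts = [[0] * n_inputs for _ in range(n_outputs)]

    def act(self):
        # Emit an untagged binary efferent vector (random exploration here).
        return [self.rng.randint(0, 1) for _ in range(self.n_outputs)]

    def observe(self, outputs, inputs):
        # Strengthen learned output -> input associations.
        for o, out_on in enumerate(outputs):
            if out_on:
                for i, in_on in enumerate(inputs):
                    if in_on:
                        self.counts[o][i] += 1

    def predicted_input(self, output_idx):
        # The input channel most associated with a given output channel.
        row = self.counts[output_idx]
        return max(range(self.n_inputs), key=lambda i: row[i])
```

In a toy environment whose inputs simply mirror the agent's outputs, the co-occurrence table quickly comes to associate each output channel with its corresponding input channel, illustrating the bootstrapping idea in miniature.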
A sub-scale experiment has been constructed using fins mounted on one wall of a transonic wind tunnel to investigate the influence of fin trailing vortices upon downstream control surfaces. Data are collected using a fin balance instrumenting the downstream fin to measure the aerodynamic forces of the interaction, combined with stereoscopic Particle Image Velocimetry to determine vortex properties. The fin balance data show that the response of the downstream fin is essentially the baseline single-fin response shifted by an amount that depends on the angle of attack of the upstream fin. Freestream Mach number and the spacing between the fins have secondary effects. The velocimetry shows that the vortex strength increases markedly with upstream fin angle of attack, though even an uncanted fin generates a noticeable wake. No variation with Mach number can be discerned in the normalized velocity data. Correlations between the force data and the velocimetry suggest that the interaction results fundamentally from an angle of attack superposed upon the downstream fin by the vortex shed from the upstream fin tip. The Mach number influence arises from differing vortex lift on the leading edge of the downstream fin, even when the impinging vortex is Mach invariant.
An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing real-world, "wickedly difficult" challenges. Previous laboratory research has engaged small groups of students in answering questions irrelevant to an industrial setting. The current experiment extended this research to larger, real-world employee groups engaged in addressing organization-relevant challenges. Within the present experiment, the data demonstrated that individuals performed at least as well as groups in terms of the number of ideas produced and significantly (p < .02) outperformed groups in terms of the quality of those ideas, as measured along the dimensions of originality, feasibility, and effectiveness.
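The abstract does not state which statistical test yielded p < .02. As a hedged illustration only, a comparison of mean idea-quality ratings between two independent conditions might use Welch's two-sample t statistic; the data below are invented, not the study's:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances (no p-value is computed in this sketch)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    standard_error = (va / na + vb / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / standard_error
```

A positive t on (individual, group) quality ratings would indicate higher mean quality for individuals; the actual significance level would follow from the t distribution with the Welch-Satterthwaite degrees of freedom.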
Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS'08 - "Personalized Healthcare through Technology"
The rich history of scalable computing research owes much to a rapid rise in computing platform scale in terms of size and speed. As platforms evolve, so must algorithms and the software expressions of those algorithms. Unbridled growth in scale inevitably leads to complexity. This special issue grapples with two facets of this complexity: scalable execution and scalable development. The former results from efficient programming of novel hardware with increasing numbers of processing units (e.g., cores, processors, threads or processes). The latter results from efficient development of robust, flexible software with increasing numbers of programming units (e.g., procedures, classes, components or developers). The progression in the above two parenthetical lists goes from the lowest levels of abstraction (hardware) to the highest (people). This issue's theme encompasses this entire spectrum. The lead author of each article resides in the Scalable Computing Research and Development Department at Sandia National Laboratories in Livermore, CA. Their co-authors hail from other parts of Sandia, other national laboratories and academia. Their research sponsors include several programs within the Department of Energy's Office of Advanced Scientific Computing Research and its National Nuclear Security Administration, along with Sandia's Laboratory Directed Research and Development program and the Office of Naval Research. The breadth of interests of these authors and their customers is reflected in the breadth of applications this issue covers. This article demonstrates how to obtain scalable execution on the increasingly dominant high-performance computing platform: a Linux cluster with multicore chips. The authors describe how deep memory hierarchies necessitate reducing communication overhead by using threads to exploit shared register and cache memory.
On a matrix-matrix multiplication problem, they achieve up to 96% parallel efficiency with a three-part strategy: intra-node multithreading, non-blocking inter-node message passing, and a dedicated communications thread to facilitate concurrent communications and computations. On a quantum chemistry problem, they spawn multiple computation threads and communication threads on each node and use one-sided communications between nodes to minimize wait times. They reduce software complexity by evolving a multi-threaded factory pattern in C++ from a working, message-passing program in C.
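The dedicated-communications-thread idea can be sketched in miniature (in Python rather than the authors' C/C++/MPI stack; all names below are illustrative, and an in-process queue stands in for inter-node message passing): compute workers push results onto a shared queue, and a single thread drains it so that "communication" proceeds concurrently with computation:

```python
import threading
import queue

def run_pipeline(tasks, n_workers=4):
    """Sketch of a dedicated communications thread: worker threads
    compute, one thread 'communicates'. A queue stands in for the
    non-blocking message-passing side of the real MPI-based design."""
    work = queue.Queue()
    results = queue.Queue()
    sent = []  # stand-in for messages delivered to another node

    def worker():
        while True:
            item = work.get()
            if item is None:            # poison pill: shut down
                return
            results.put(item * item)    # stand-in "computation"

    def communicator():
        while True:
            r = results.get()
            if r is None:               # sentinel: all results drained
                return
            sent.append(r)              # stand-in for an MPI send

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    comm = threading.Thread(target=communicator)
    for t in workers:
        t.start()
    comm.start()

    for task in tasks:
        work.put(task)
    for _ in workers:
        work.put(None)                  # one poison pill per worker
    for t in workers:
        t.join()
    results.put(None)                   # stop the communications thread
    comm.join()
    return sorted(sent)
```

In the real design the communications thread overlaps network transfers with the workers' floating-point work; here the overlap is only between queue draining and squaring, but the thread topology is the same.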
A coupled Euler-Lagrange solution approach is used to model the response of a buried reinforced concrete structure subjected to a close-in detonation of a high explosive charge. The coupling algorithm is discussed along with a set of benchmark calculations involving detonations in clay and sand.
A total system performance assessment (TSPA) model has been developed to analyze the ability of the natural and engineered barriers of the Yucca Mountain repository to isolate nuclear waste over the period following repository closure. The principal features of the engineered barrier system are emplacement tunnels (or "drifts") containing a two-layer waste package (WP) for waste containment and a titanium drip shield to protect the WP from seeping water and falling rock. The 25-mm-thick outer shell of the WP is composed of Alloy 22, a highly corrosion-resistant nickel-based alloy. There are five nominal degradation modes of the Alloy 22: general corrosion, microbially influenced corrosion, stress corrosion cracking, early failure due to manufacturing defects, and localized corrosion (LC). This paper specifically examines the incorporation of the Alloy 22 LC model into the Yucca Mountain TSPA model, particularly the abstraction and modeling methodology, as well as issues dealing with scaling, spatial variability, uncertainty, and coupling to other submodels that are part of the total system model, such as the submodel for seepage water chemistry.
Future energy systems based on gasification of coal or biomass for co-production of electrical power and fuels may require gas turbine operation on unusual gaseous fuel mixtures. In addition, global climate change concerns may dictate the generation of a CO2 product stream for end-use or sequestration, with potential impacts on the oxidizer used in the gas turbine. In this study, the operation at atmospheric pressure of a small, optically accessible, swirl-stabilized premixed combustor is investigated, burning fuels ranging from pure methane to conventional, H2-rich, and H2-lean syngas mixtures. Both air and CO2-diluted oxygen are used as oxidizers. CO and NOx emissions for these flames have been determined from the lean blowout limit to slightly rich conditions (equivalence ratio φ = 1.03). In practice, CO2-diluted oxygen systems will likely be operated close to stoichiometric conditions to minimize oxygen consumption while achieving acceptable NOx performance. The presence of hydrogen in the syngas fuel mixtures produces more compact, higher-temperature flames, resulting in increased flame stability and higher NOx emissions. Consistent with previous experience, the stoichiometry of lean blowout decreases with increasing H2 content in the syngas. Similarly, the lean stoichiometry at which CO emissions become significant decreases with increasing H2 content. For the mixtures investigated, CO emissions near the stoichiometric point do not become significant until φ = 0.95. At this stoichiometric limit, CO emissions rise more rapidly for combustion in O2-CO2 mixtures than for combustion in air.
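The stoichiometry values above are equivalence ratios, φ = (fuel/oxidizer) / (fuel/oxidizer)_stoich, with φ > 1 fuel-rich and φ < 1 fuel-lean. As a small illustrative helper (not from the paper; the default stoichiometric ratio assumes pure methane, CH4 + 2 O2 → CO2 + 2 H2O):

```python
def equivalence_ratio(fuel_moles, o2_moles, stoich_o2_per_fuel=2.0):
    """phi = (fuel/O2) / (fuel/O2)_stoich on a molar basis.
    The default of 2 mol O2 per mol fuel corresponds to methane:
    CH4 + 2 O2 -> CO2 + 2 H2O. phi > 1 is rich, phi < 1 is lean."""
    actual_ratio = fuel_moles / o2_moles
    stoich_ratio = 1.0 / stoich_o2_per_fuel
    return actual_ratio / stoich_ratio
```

For syngas mixtures, `stoich_o2_per_fuel` would instead be the mole-weighted O2 demand of the H2/CO/CH4 blend.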
A method to measure interfacial mechanical properties at high temperatures and in a controlled atmosphere has been developed to study anodized aluminum surface coatings at temperatures where the interior aluminum alloy is molten. This is the first time that the coating strength has been studied under these conditions. We have investigated the effects of ambient atmosphere, temperature, and surface finish on coating strength for samples of aluminum alloy 7075. Surprisingly, the effective Young's modulus or strength of the coating when tested in air was twice as high as when samples were tested in an inert nitrogen or argon atmosphere. Additionally, the effective Young's modulus of the anodized coating increased with temperature in an air atmosphere but was independent of temperature in an inert atmosphere. The effect of surface finish was also examined. Sandblasting the surface prior to anodization was found to increase the strength of the anodized coating with the greatest enhancement noted for a nitrogen atmosphere. Machining marks were not found to significantly affect the strength.