We present a novel class of dynamic neural networks that is capable of learning, in an unsupervised manner, attractors that correspond to generalities in a data set. Upon presentation of a test stimulus, the networks follow a sequence of attractors that correspond to subsets of increasing size or generality in the original data set. The networks, inspired by those of the insect antennal lobe, build upon a modified Hopfield network in which nodes are periodically suppressed, global inhibition is gradually strengthened, and the weight of input neurons is gradually decreased relative to recurrent connections. This allows the networks to converge on a Hopfield network's equilibrium within each suppression cycle and to switch between attractors between cycles. The fast, mutually reinforcing excitatory connections that dominate the dynamics within cycles ensure the robust, error-tolerant behavior that characterizes Hopfield networks. The cyclic inhibition releases the network from what would otherwise be stable equilibria or attractors. Increasing global inhibition and decreasing dependence on the input lead successive attractors to differ and to display increasing generality. As the network faces stronger inhibition, only neurons connected by stronger mutually excitatory connections remain on; successive attractors therefore consist of sets of neurons that are more strongly correlated and tend to select increasingly generic characteristics of the data. Using artificial data, we identified configurations of the network that appeared to produce a sequence of increasingly general results. The next logical steps are to apply these networks to suitable real-world data that can be characterized by a hierarchy of increasing generality and to observe the network's performance. This report describes the work, the data and results, our current understanding of those results, and how the work could be continued. The code, data, and preliminary results are included and are available as an archive.
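A minimal sketch of these dynamics, under stated assumptions (binary 0/1 units, co-activation weights, asynchronous threshold updates, and placeholder schedules for the inhibition ramp and input decay; none of this is the authors' implementation), might look as follows:

```python
import numpy as np

def coactivation_weights(patterns):
    """Excitatory weights from co-activation counts across binary (0/1) patterns,
    zero diagonal: neurons that are frequently active together excite each other."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def run_cycles(W, stimulus, n_cycles=8, steps_per_cycle=200,
               inhibition=0.5, inhibition_step=0.25, input_decay=0.7, seed=0):
    """Illustrative dynamics: within each suppression cycle the network relaxes
    toward a Hopfield-like attractor; between cycles all nodes are suppressed,
    global inhibition is strengthened, and the stimulus weight is reduced."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    input_gain = 1.0
    attractors = []
    for _ in range(n_cycles):
        x = np.zeros(n)                           # periodic suppression releases the previous attractor
        for _ in range(steps_per_cycle):
            i = rng.integers(n)                   # asynchronous threshold update
            h = W[i] @ x + input_gain * stimulus[i] - inhibition
            x[i] = 1.0 if h > 0 else 0.0
        attractors.append(x.copy())               # (approximate) attractor reached this cycle
        inhibition += inhibition_step             # stronger inhibition -> smaller, more correlated set
        input_gain *= input_decay                 # recurrent connections dominate over the input
    return attractors
```

With this schedule, each cycle's attractor is a subset of strongly co-active neurons, and later cycles retain only the most strongly interconnected ones, mirroring the progression toward generality described above.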
We developed an Augmented Musculature Device (AMD) that assists the movements of its wearer. It has direct application to aiding military and law enforcement personnel, the neurologically impaired, or those requiring any type of cybernetic assistance. The AMD consists of a collection of artificial muscles, each individually actuated, strategically placed along the surface of the human body. The actuators employed by the AMD are known as 'air muscles' and operate pneumatically. They are commercially available from several vendors and are relatively inexpensive. They have a remarkably high force-to-weight ratio, as high as 400:1, compared with the 16:1 typical of DC motors. They are flexible and elastic, even when powered, making them ideal for interaction with humans.
The challenge of modeling the organization and function of biological membranes on a solid support has received considerable attention in recent years, driven primarily by potential applications in biosensor design. Affinity-based biosensors show great promise for extremely sensitive detection of BW agents and toxins. Receptor molecules have been successfully incorporated into phospholipid bilayers supported on sensing platforms. However, a collective body of data detailing a mechanistic understanding of the membrane processes involved in receptor-substrate interactions, and of the competition between localized perturbations and delocalized responses that results in reorganization of transmembrane protein structure, has yet to be produced. This report describes a systematic procedure to develop a detailed correlation between recognition-induced protein restructuring and the function of a ligand-gated ion channel by combining single-molecule fluorescence spectroscopy and single-channel current recordings. The document is divided into three sections: (1) the thermodynamics and diffusion properties of gramicidin, studied using single-molecule fluorescence imaging; (2) preliminary work on the 5HT{sub 3} serotonin receptor; and (3) the design and fabrication of a miniaturized platform that combines these two technologies (spectroscopic and single-channel electrochemical techniques) for single-molecule analysis, with the longer-term goal of using the physical and electronic changes caused by a specific molecular recognition event as a transduction pathway in affinity-based biosensors for biotoxin detection.
This white paper summarizes work intended to lay the foundation for development of a climatological/agent model of climate-induced conflict. The paper combines several loosely coupled efforts and is the final report for a four-month late-start Laboratory Directed Research and Development (LDRD) project funded by the Advanced Concepts Group (ACG). The project involved contributions by many participants having diverse areas of expertise, with the common goal of learning how to tie together the physical and human causes and consequences of climate change. We performed a review of relevant literature on conflict arising from environmental scarcity. Rather than simply reviewing the previous work, we actively collected data from the referenced sources, reproduced some of the work, and explored alternative models. We used the unfolding crisis in Darfur (western Sudan) as a case study of conflict related to or triggered by climate change, and as an exercise for developing a preliminary concept map. We also outlined a plan for implementing agents in a climate model and defined a logical progression toward the ultimate goal of running both types of models simultaneously in a two-way feedback mode, where the behavior of agents influences the climate and climate change affects the agents. Finally, we offer some "lessons learned" from attempting to keep a diverse and geographically dispersed group working together by using Web-based collaborative tools.
The Common Geometry Module (CGM) is a code library that provides geometry functionality for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, such as geometry creation, query, and modification; CGM also includes capabilities not commonly found in solid modeling engines, such as geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed alongside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux, and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers using MPI-based communication. Future plans for CGM include porting it to other solid modeling engines, such as Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.
Thermal actuators have proven to be a robust actuation method in surface-micromachined MEMS processes. Their higher output force and lower input voltage make them an attractive alternative to more traditional electrostatic actuation methods. A predictive model of thermal actuator behavior has been developed and validated that can be used as a design tool to customize the performance of an actuator to a specific application. This tool has also been used to better understand thermal actuator reliability by comparing the maximum actuator temperature to the measured lifetime. Modeling thermal actuator behavior requires the use of two sequentially coupled models, the first to predict the temperature increase of the actuator due to the applied current and the second to model the mechanical response of the structure due to the increase in temperature. These two models have been developed using Matlab for the thermal response and ANSYS for the structural response. Both models have been shown to agree well with experimental data. In a parallel effort, the reliability and failure mechanisms of thermal actuators have been studied. Their response to electrical overstress and electrostatic discharge has been measured and a study has been performed to determine actuator lifetime at various temperatures and operating conditions. The results from this study have been used to determine a maximum reliable operating temperature that, when used in conjunction with the predictive model, enables us to design in reliability and customize the performance of an actuator at the design stage.
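A lumped illustration of the two sequentially coupled models (not the Matlab/ANSYS models of the study; all material values and the geometric amplification factor below are placeholders) passes the predicted temperature rise from an electrothermal step into a thermal-expansion step:

```python
def thermal_step(current, resistance, h_eff, area, t_ambient=293.0):
    """Steady-state lumped energy balance: Joule heating I^2*R balanced by an
    effective conductive/convective loss h_eff*area*(T - T_ambient)."""
    return t_ambient + (current**2 * resistance) / (h_eff * area)

def structural_step(temperature, t_ambient, alpha, leg_length, amplification):
    """Thermal-expansion strain alpha*dT over the hot leg, converted to tip
    displacement by a geometric amplification factor of the actuator layout."""
    return amplification * alpha * (temperature - t_ambient) * leg_length

# Sequential coupling: the thermal result feeds the structural model (placeholder values).
T = thermal_step(current=3e-3, resistance=200.0, h_eff=2e3, area=1e-9)
u_tip = structural_step(T, 293.0, alpha=2.6e-6, leg_length=200e-6, amplification=20.0)
```

In the actual design tool the two stages are far more detailed, but the data flow is the same: the electrothermal solution supplies the temperature field that drives the structural response.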
The solution of the governing steady transport equations for momentum, heat, and mass transfer in fluids undergoing non-equilibrium chemical reactions can be extremely challenging. The difficulties arise both from the complexity of the nonlinear solution behavior and from the nonlinear, coupled, non-symmetric nature of the system of algebraic equations that results from spatial discretization of the PDEs. In this paper, we briefly review progress on developing a stabilized finite element (FE) capability for numerical solution of these challenging problems. The discussion considers the stabilized FE formulation for the low Mach number Navier-Stokes equations with heat and mass transport and non-equilibrium chemical reactions, and the solution methods necessary for detailed analysis of these complex systems. The solution algorithms include robust nonlinear and linear solution schemes, parameter continuation methods, and linear stability analysis techniques. Our discussion considers computational efficiency, scalability, and some implementation issues of the solution methods. Computational results are presented for a CFD benchmark problem as well as for a number of large-scale 2D and 3D engineering transport/reaction applications.
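As a sketch of how the continuation and nonlinear solution schemes fit together (a generic illustration, not the production FE solver; `residual` and `jacobian` stand in for the assembled, discretized transport/reaction system):

```python
import numpy as np

def newton(residual, jacobian, u, lam, tol=1e-10, max_iter=20):
    """Damped Newton iteration for R(u; lambda) = 0 at a fixed parameter value."""
    for _ in range(max_iter):
        r = residual(u, lam)
        if np.linalg.norm(r) < tol:
            return u, True
        du = np.linalg.solve(jacobian(u, lam), -r)   # a preconditioned Krylov solve in practice
        u = u + 0.5 * du                             # fixed damping, chosen here only for robustness
    return u, False

def continuation(residual, jacobian, u0, lam_values):
    """Natural parameter continuation: each converged state seeds the next solve
    as the parameter (e.g. a Reynolds or Damkohler number) is increased."""
    states, u = [], u0
    for lam in lam_values:
        u, ok = newton(residual, jacobian, u, lam)
        if not ok:
            break                                    # in practice, cut the parameter step and retry
        states.append((lam, u.copy()))
    return states
```

Linear stability analysis would then examine the eigenvalues of the Jacobian at each converged state along the branch.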
This study investigates algebraic multilevel domain decomposition preconditioners of the Schwarz type for solving linear systems associated with Newton-Krylov methods. The key component of the preconditioner is a coarse approximation, based on algebraic multigrid ideas, that captures the global behavior of the linear system. The algebraic multilevel preconditioner is based on an aggressive-coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. The scalability of the preconditioner is presented, along with comparisons to a two-level Schwarz preconditioner that uses a geometric coarse grid operator. These comparisons are obtained on large-scale distributed-memory parallel machines for systems arising from incompressible flow and transport using a stabilized finite element formulation. The results demonstrate the influence of the smoothers and coarse-level solvers for a set of 3D example problems. For preconditioners with more than one level, careful attention must be given to the balance between the robustness and convergence rate of the smoothers and the cost of applying these methods. For properly chosen parameters, the two- and three-level preconditioners are demonstrated to be scalable to 1024 processors.
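Schematically, applying a two-level additive Schwarz preconditioner combines local subdomain solves with a coarse correction; the sketch below uses dense linear algebra for brevity and is not the parallel implementation described here:

```python
import numpy as np

def additive_schwarz(A, r, subdomains, R=None, A_coarse=None):
    """Apply z = M^{-1} r for an additive Schwarz preconditioner.
    subdomains: list of index arrays (overlapping blocks of unknowns).
    R, A_coarse: optional aggregation-based coarse level, with A_coarse = R A R^T."""
    z = np.zeros_like(r)
    for idx in subdomains:                       # local subdomain solves (the one-level method)
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    if R is not None:                            # coarse correction captures global coupling
        z += R.T @ np.linalg.solve(A_coarse, R @ r)
    return z
```

In the preconditioner studied here, the rows of R come from an aggressive-coarsening aggregation of the Jacobian's nonzero block structure rather than from a geometric coarse grid.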
Least-squares finite-element methods for Darcy flow offer several advantages relative to the mixed-Galerkin method: the avoidance of stability conditions between finite-element spaces, the efficiency of solving symmetric and positive definite systems, and the convenience of using standard, continuous nodal elements for all variables. However, conventional C{sup 0} implementations conserve mass only approximately, and for this reason they have found limited acceptance in applications where locally conservative velocity fields are of primary interest. In this paper, we show that a properly formulated compatible least-squares method offers the same level of local conservation as a mixed method. The price paid for gaining favorable conservation properties is that one has to give up what is arguably the least important advantage attributed to least-squares finite-element methods: one can no longer use continuous nodal elements for all variables. As an added benefit, compatible least-squares methods inherit the best computational properties of both Galerkin and mixed-Galerkin methods and, in some cases, yield identical results, while offering the advantages of not having to deal with stability conditions and of yielding positive definite discrete problems. Numerical results that illustrate our findings are provided.
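For reference, the first-order Darcy system and the least-squares functional being minimized can be written as follows (a standard statement of the formulation with commonly used notation; weights and boundary terms are omitted):

```latex
\begin{aligned}
&\text{Darcy system:} && \mathbf{u} + K\,\nabla p = \mathbf{0}, \qquad
  \nabla\!\cdot\mathbf{u} = f \quad \text{in } \Omega,\\[4pt]
&\text{Least-squares functional:} &&
  J(\mathbf{u},p) \;=\; \tfrac{1}{2}\,\bigl\|\mathbf{u} + K\,\nabla p\bigr\|_{0,\Omega}^{2}
  \;+\; \tfrac{1}{2}\,\bigl\|\nabla\!\cdot\mathbf{u} - f\bigr\|_{0,\Omega}^{2}.
\end{aligned}
```

Minimizing J over a compatible pair of spaces, for example div-conforming velocity elements paired with standard nodal pressures, is what recovers local conservation, at the cost of giving up continuous nodal elements for the velocity.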
Acts of terrorism could have a range of broad impacts on an economy, including changes in consumer (or demand) confidence and in the ability of productive sectors to respond to those changes. As a first step toward a model of terrorism-based impacts, we develop here a model of production and employment that characterizes dynamics in ways useful for understanding how terrorism-based shocks could propagate through the economy; subsequent models will introduce the role of savings and investment into the economy. We use Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate for validation purposes that a single-firm economy converges to the known monopoly equilibrium price, output, and employment levels, while multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment. However, we find that competition also leads to churn by consumers seeking lower prices, making it difficult for firms to optimize with respect to wages, prices, and employment levels. Thus, competitive firms generate market "noise" in the steady state as they search for the prices and employment levels that will maximize profits. In the context of this model, not only could terrorism depress overall consumer confidence and economic activity, but terrorist acts could also cause normal short-run dynamics to be misinterpreted by consumers as a faltering economy.
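The convergence and "market noise" behavior can be illustrated with a toy agent loop (illustrative only, not the Aspen model; the willingness-to-pay demand rule and the experiment-and-revert price search are placeholder assumptions):

```python
import random

def simulate(n_firms, n_consumers=200, steps=5000, cost=1.0, max_wtp=10.0):
    """Toy price-search economy. Each consumer buys one unit from the cheapest
    firm priced below its willingness to pay; each firm keeps experimenting with
    its price and reverts whenever realized profit falls."""
    wtp = [random.uniform(cost, max_wtp) for _ in range(n_consumers)]
    prices = [random.uniform(cost, max_wtp) for _ in range(n_firms)]
    last_price = prices[:]
    last_profit = [0.0] * n_firms
    for _ in range(steps):
        cheapest = min(range(n_firms), key=lambda i: prices[i])   # consumer churn to the lowest price
        sales = sum(1 for w in wtp if prices[cheapest] <= w)
        for i in range(n_firms):
            profit = (prices[i] - cost) * (sales if i == cheapest else 0)
            if profit < last_profit[i]:
                prices[i] = last_price[i]                          # the experiment hurt profit: revert
            last_price[i], last_profit[i] = prices[i], profit
            prices[i] = max(cost, prices[i] + random.uniform(-0.1, 0.1))  # try a new price
    return prices
```

With one firm the search settles near a monopoly price; with several firms, undercutting pushes prices toward cost, and the continual experimentation is the steady-state "noise" noted above.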
The impact of 3D structure on wire array z-pinch dynamics is a topic of current interest, and has been studied by the controlled seeding of wire perturbations. First, Al wires were etched at Sandia, creating 20% radial perturbations with variable axial wavelength. Observations of magnetic bubble formation in the etched regions during experiments on the MAGPIE accelerator are discussed and compared to 3D MHD modeling. Second, thin NaF coatings of 1 mm axial extent were deposited on Al wires and fielded on the Zebra accelerator. Little or no axial transport of the NaF spectroscopic dopant was observed in spatially resolved K-shell spectra, which places constraints on particle diffusivity in dense z-pinch plasmas. Finally, technology development for seeding perturbations is discussed.
This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
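A minimal sketch of a life-cycle savings rule of the kind described above (a placeholder rule, not the paper's calibration): working households accumulate toward a target sufficient to smooth consumption over retirement, and retired households draw that target down.

```python
def target_savings(age, wage, work_start=20, retire_age=65, life_span=85):
    """Simple life-cycle target: accumulate enough by retirement to sustain a
    constant consumption level over the remaining retirement years."""
    work_years = retire_age - work_start
    retire_years = life_span - retire_age
    consumption = wage * work_years / (work_years + retire_years)   # smoothed consumption level
    if age < retire_age:
        years_worked = age - work_start
        return consumption * retire_years * years_worked / work_years  # linear ramp toward the goal
    return consumption * max(life_span - age, 0)                        # draw down after retirement

def savings_decision(age, wage, current_savings):
    """Positive result: contribute to the bond market; negative: redeem bonds to consume."""
    return target_savings(age, wage) - current_savings
```

Each period a household revises its target as it ages, so aggregate bond demand reflects the age structure of the overlapping generations.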
In the search for "good" parallel programming environments for Sandia's current and future parallel architectures, we revisit a long-standing open question: can the PRAM parallel algorithms designed by theoretical computer scientists over the last two decades be implemented efficiently? This open question has coexisted with ongoing efforts in the HPC community to develop practical parallel programming models that can simultaneously provide ease of use, expressiveness, performance, and scalability. Unfortunately, no single model has met all of these competing requirements. Here we propose a parallel programming environment, PRAM C, to bridge the gap between theory and practice, in an attempt to provide an affirmative answer to the PRAM question and to satisfy these competing practical requirements. The environment consists of a new thin runtime layer and an ANSI C extension. The C extension has two control constructs and one additional data type concept, "shared". This C extension should enable easy translation from PRAM algorithms to real parallel programs, much like the translation from sequential algorithms to C programs. The thin runtime layer bundles fine-grained communication requests into coarse-grained communication to be served by message passing. Although the PRAM represents SIMD-style fine-grained parallelism, a stand-alone PRAM C environment can support both fine-grained and coarse-grained parallel programming in either a MIMD or SPMD style, interoperate with existing MPI libraries, and use existing hardware. The PRAM C model can also be integrated easily with existing models. Unlike related efforts proposing innovative hardware with the goal of realizing the PRAM, ours can be a pure software solution whose purpose is to provide a practical programming environment for existing parallel machines; it also has the potential to perform well on future parallel architectures.
Genetic programming (GP) has proved to be a highly versatile and useful tool for identifying relationships in data for which a more precise theoretical construct is unavailable. In this project, we use a GP search to develop trading strategies for agent-based economic models. These strategies use stock prices and technical indicators, such as the moving average convergence/divergence (MACD) and various exponentially weighted moving averages, to generate buy and sell signals. We analyze the effect of complexity constraints on the strategies as well as the relative performance of the various indicators. We also present innovations in the classical genetic programming algorithm that appear to improve convergence for this problem. Technical strategies developed by our GP algorithm can be used to control the behavior of agents in economic simulation packages, such as ASPEN-D, adding variety to the current market-fundamentals approach. The exploitation of arbitrage opportunities by technical analysts may help increase the efficiency of the simulated stock market, as it does in the real world. By improving the behavior of simulated stock markets, we can better estimate the effects of shocks to the economy due to terrorism or natural disasters.
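For concreteness, the kind of technical indicators the evolved strategies combine can be computed as below; the 12/26/9 parameters are the conventional MACD settings, not necessarily those selected by the GP search:

```python
def ewma(prices, span):
    """Exponentially weighted moving average with smoothing factor 2/(span + 1)."""
    alpha = 2.0 / (span + 1)
    out, avg = [], prices[0]
    for p in prices:
        avg = alpha * p + (1 - alpha) * avg
        out.append(avg)
    return out

def macd_signal(prices, fast=12, slow=26, signal=9):
    """MACD = EWMA(fast) - EWMA(slow); buy when MACD crosses above its own EWMA
    (the 'signal line'), sell when it crosses below."""
    macd = [f - s for f, s in zip(ewma(prices, fast), ewma(prices, slow))]
    sig = ewma(macd, signal)
    actions = ["hold"]
    for t in range(1, len(prices)):
        prev, cur = macd[t - 1] - sig[t - 1], macd[t] - sig[t]
        actions.append("buy" if prev <= 0 < cur else "sell" if prev >= 0 > cur else "hold")
    return actions
```

A GP-evolved strategy would compose signals like these (and the raw prices) into an expression tree whose size is limited by the complexity constraints mentioned above.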
In this paper we present an analysis of a new configuration for achieving spin-stabilized magnetic levitation. In the classical configuration, the rotor spins about a vertical axis, and the spin stabilizes the lateral instability of the top in the magnetic field. In the new configuration, the rotor spins about a horizontal axis, and the spin stabilizes the axial instability of the top in the magnetic field.
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML is best suited to linear systems for which multigrid methods work well (e.g., elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g., Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative packages [16]; however, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (such as elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
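The heart of a smoothed aggregation setup can be sketched as follows (a generic illustration of the technique in dense arithmetic, not ML's code or API): a tentative prolongator built from aggregates of the matrix graph is smoothed by one damped-Jacobi step, and the coarse operator is formed by a Galerkin triple product.

```python
import numpy as np

def smoothed_aggregation_level(A, aggregates, omega=2.0 / 3.0):
    """One level of smoothed aggregation AMG (dense arithmetic for clarity).
    aggregates: integer array mapping each fine unknown to an aggregate index."""
    n = A.shape[0]
    n_coarse = aggregates.max() + 1
    P_tent = np.zeros((n, n_coarse))
    P_tent[np.arange(n), aggregates] = 1.0                  # piecewise-constant tentative prolongator
    D_inv = 1.0 / np.diag(A)
    P = P_tent - omega * (D_inv[:, None] * (A @ P_tent))    # damped-Jacobi smoothing: (I - w D^-1 A) P_tent
    A_coarse = P.T @ A @ P                                  # Galerkin coarse-level operator
    return P, A_coarse
```

Recursing on A_coarse yields the multigrid hierarchy; the damping parameter and the aggregation strategy are among the multigrid options the examples exercise.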
This paper presents solution verification studies applicable to a class of problems involving wave propagation, frictional contact, geometrical complexity, and localized incompressibility. The studies are in support of a validation exercise of a phenomenological screw failure model. The numerical simulations are performed using a fully explicit transient dynamics finite element code, employing both standard four-node tetrahedral and eight-node mean quadrature hexahedral elements. It is demonstrated that verifying the accuracy of the simulation involves not only consideration of the mesh discretization error, but also the effect of the hourglass control and the contact enforcement. In particular, the proper amount of hourglass control and the behavior of the contact search and enforcement algorithms depend greatly on the mesh resolution. We carry out the solution verification exercise using mesh refinement studies and describe our systematic approach to handling the complicating issues. It is shown that hourglassing and contact must both be carefully monitored as the mesh is refined, and it is often necessary to make adjustments to the hourglass and contact user input parameters to accommodate finer meshes. We introduce in this paper the hourglass energy, which is used as an 'error indicator' for the hourglass control. If the hourglass energy does not tend to zero with mesh refinement, then an hourglass control parameter is changed and the calculation is repeated.
An important challenge encountered during post-processing of finite element analyses is the visualization of three-dimensional fields of real-valued second-order tensors. In particular, as finite element meshes become more complex and detailed, evaluation and presentation of the principal stresses become correspondingly problematic. In this paper, we describe techniques used to visualize simulations of perturbed in-situ stress fields associated with hypothetical salt bodies in the Gulf of Mexico. We present an adaptation of the Mohr diagram, a graphical paper-and-pencil method used by the material mechanics community for estimating coordinate transformations for stress tensors, as a new tensor glyph for dynamically exploring tensor variables within three-dimensional finite element models. This interactive glyph can be used as either a probe or a filter through brushing and linking.
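The quantities plotted in a Mohr diagram follow directly from the principal stresses of the symmetric stress tensor; a minimal computation of the three Mohr circles for a single tensor sample (the basis of the glyph, though not the visualization code itself) is sketched below:

```python
import numpy as np

def mohr_circles(stress):
    """Given a symmetric 3x3 stress tensor, return the principal stresses and the
    (center, radius) of the three Mohr circles in the normal/shear-stress plane."""
    sigma = np.linalg.eigvalsh(np.asarray(stress, dtype=float))   # principal stresses, ascending
    s1, s2, s3 = sigma[2], sigma[1], sigma[0]
    circles = [((s1 + s3) / 2.0, (s1 - s3) / 2.0),   # outer circle: radius = maximum shear stress
               ((s1 + s2) / 2.0, (s1 - s2) / 2.0),
               ((s2 + s3) / 2.0, (s2 - s3) / 2.0)]
    return (s1, s2, s3), circles

# Example: a compressive in-situ stress state perturbed by shear (units arbitrary)
principals, circles = mohr_circles([[-30.0,   5.0,  0.0],
                                    [  5.0, -20.0,  0.0],
                                    [  0.0,   0.0, -10.0]])
```

Drawing these circles at each probed location gives a compact glyph of the local stress state without choosing a single scalar to color by.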
The purpose of the present work is to increase our understanding of which properties of geomaterials most influence the penetration process, with the goal of improving our predictive ability. Two primary approaches were followed: development of a realistic constitutive model for geomaterials and design of an experimental approach to study penetration from the target's point of view. A realistic constitutive model, with parameters based on measurable properties, can be used for sensitivity analysis to determine the properties that are most important in influencing the penetration process. An immense literature exists devoted to the problem of predicting penetration into geomaterials or similar man-made materials such as concrete. Various formulations have been developed that use an analytic or, more commonly, numerical solution for spherical or cylindrical cavity expansion as a sort of Green's function to establish the forces acting on a penetrator. This approach has had considerable success in modeling the behavior of penetrators, both as to path and depth of penetration. However, the approach is not well adapted to the problem of understanding what is happening to the material being penetrated. Without a picture of the stress and strain state imposed on the highly deformed target material, it is not easy to determine which properties of the target are important in influencing the penetration process. We developed an experimental arrangement that allows greater control of the deformation than is possible in actual penetrator tests, yet approximates the deformation processes imposed by a penetrator. Using explosive line charges placed in a central borehole, we loaded cylindrical specimens in a manner equivalent to an increment of penetration, allowing the measurement of the associated strains and accelerations and the retrieval of specimens from the more-or-less intact cylinder. Results show clearly that the deformation zone is highly concentrated near the borehole, with almost no damage occurring beyond half a borehole diameter. This implies that penetration is not strongly influenced by anything but the material within a diameter or so of the penetration. For penetrator tests, target size should not matter strongly once target diameters exceed some small multiple of the penetrator diameter, and penetration into jointed rock should not be much affected unless a discontinuity lies within a similar range. Accelerations measured at several points along a radius from the borehole are consistent with highly concentrated damage and energy absorption: at the borehole wall, accelerations were an order of magnitude higher than at half a diameter, while at the outer surface, eight diameters away, accelerations were as expected for propagation through an elastic medium. Accelerations measured at the outer surface of the cylinders increased significantly with cure time for the concrete. As strength increased, less damage was observed near the explosively driven borehole wall, consistent with the lower energy absorption expected and observed for stronger concrete. Because it is the energy-absorbing properties of a target that ultimately stop a penetrator, we believe this may point the way to a more readily determined equivalent of the S number.
Biosecurity must be implemented without impeding biomedical and bioscience research. Existing security literature and regulatory requirements do not present a comprehensive approach or clear model for biosecurity, nor do they fully recognize the operational issues within laboratory environments. To help address these issues, the concept of Biosecurity Levels should be developed. Biosecurity Levels would apply increasing levels of security protection depending on the attractiveness of the pathogens to adversaries. Pathogens and toxins would be placed in a Biosecurity Level based on their security risk. Specifically, the security risk would be a function of an agent's weaponization potential and the consequences of its use. To demonstrate the concept, examples of security risk assessments for several human, animal, and plant pathogens are presented. Higher security than that currently mandated by federal regulations would be applied to those very few agents that represent true weapons threats, and lower levels to the remainder.
By means of coupled-cluster theory, molecular properties can be computed with an accuracy often exceeding that of experiment. The high-degree polynomial scaling of the coupled-cluster method, however, remains a major obstacle in the accurate theoretical treatment of mainstream chemical problems, despite tremendous progress in computer architectures. Although it has long been recognized that this super-linear scaling is non-physical, efficient reduced-scaling algorithms for massively parallel computers have not yet been realized. We here present a locally correlated, reduced-scaling, massively parallel coupled-cluster algorithm. A sparse data representation for handling distributed, sparse multidimensional arrays has been implemented, along with a set of generalized contraction routines capable of handling such arrays. The parallel implementation entails a coarse-grained parallelization, reducing interprocessor communication and distributing the largest data arrays while replicating as many arrays as possible without introducing memory bottlenecks. The performance of the algorithm is illustrated by several series of runs for glycine chains on a Linux cluster with an InfiniBand interconnect.
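The sparse data representation and generalized contractions can be illustrated with a toy block-sparse scheme (a much-simplified stand-in for the distributed multidimensional arrays of the actual implementation): only nonzero blocks are stored, keyed by their block indices, and a contraction loops over matching keys, skipping absent blocks entirely.

```python
import numpy as np

def contract(A_blocks, B_blocks):
    """Contract two block-sparse 'matrices' stored as {(row_block, col_block): ndarray}.
    Only stored block pairs sharing a contraction index contribute; skipping the
    rest is where locally correlated methods gain their reduced scaling."""
    C_blocks = {}
    for (i, k), a in A_blocks.items():
        for (k2, j), b in B_blocks.items():
            if k != k2:
                continue                          # block is zero by sparsity: no work, no storage
            C_blocks[(i, j)] = C_blocks.get((i, j), 0) + a @ b
    return C_blocks

# Example: 2x2 blocks, with only some blocks stored
A = {(0, 0): np.eye(2), (0, 1): np.ones((2, 2))}
B = {(0, 0): np.eye(2), (1, 0): 2.0 * np.eye(2)}
C = contract(A, B)        # C[(0, 0)] = I + 2*ones((2, 2)); no other blocks are formed
```

In the actual algorithm the blocks are multidimensional, distributed across processors, and contracted by generalized routines, but the principle of storing and touching only nonzero blocks is the same.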