Publications

Results 9651–9700 of 9,998

Stability of biological networks as represented in Random Boolean Nets

Slepoy, Alexander S.; Thompson, Marshall A.

We explore the stability of Random Boolean Networks as a model of biological interaction networks. We introduce the surface-to-volume ratio as a measure of network stability. The surface is defined as the set of states within a basin of attraction that map outside the basin under a single bit-flip operation; the volume is the total number of states in the basin. We report the development of an object-oriented Boolean network analysis code (Attract) to investigate the structure of stable versus unstable networks. We find two distinct types of stable networks: the first is the nearly trivial stable network with a few basins of attraction; the second contains many basins. We conclude that stable networks of the second type are extremely rare.
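
As a concrete illustration of the surface-to-volume measure, the following sketch (hypothetical network sizes, not the Attract code) builds a small random Boolean network, assigns every state to a basin of attraction, and reports each basin's surface-to-volume ratio, counting a state as surface if some single bit flip maps it into a different basin.

```python
import random
from collections import Counter

random.seed(1)
N, K = 8, 2  # hypothetical small net: 8 nodes, 2 inputs per node

# Random wiring and random truth tables define the network.
wires = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronous update: each node looks up its inputs in its truth table."""
    nxt = 0
    for i in range(N):
        idx = 0
        for w in wires[i]:
            idx = (idx << 1) | ((state >> w) & 1)
        nxt |= tables[i][idx] << i
    return nxt

# Label every state with its basin, identified by the smallest state
# on the attractor cycle it eventually reaches.
basin = {}
for s in range(2 ** N):
    path, pos = [], {}
    t = s
    while t not in basin and t not in pos:
        pos[t] = len(path)
        path.append(t)
        t = step(t)
    label = basin[t] if t in basin else min(path[pos[t]:])
    for u in path:
        basin[u] = label

# Surface states: a one-bit neighbor lies in a different basin.
volume = Counter(basin.values())
surface = Counter()
for s in range(2 ** N):
    if any(basin[s ^ (1 << b)] != basin[s] for b in range(N)):
        surface[basin[s]] += 1

for lab in volume:
    print(lab, surface[lab] / volume[lab])
```

A network whose basins have low surface-to-volume ratios is stable in the sense above: most single-bit perturbations leave the trajectory in the same basin.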

Optimal neuronal tuning for finite stimulus spaces

Proposed for publication in Neural Computation.

Brown, William M.; Backer, Alejandro B.

The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We extend a theoretical study by Zhang and Sejnowski, in which the influence of neuronal tuning properties on encoding accuracy is analyzed using information theory. When a finite stimulus space is considered, we show that encoding accuracy improves with narrow tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.

Dynamic context discrimination : psychological evidence for the Sandia Cognitive Framework

Speed, Ann S.

Human behavior is a function of an iterative interaction between the stimulus environment and past experience. It is not simply a matter of the current stimulus environment activating the appropriate experience or rule from memory (e.g., if it is dark and I hear a strange noise outside, then I turn on the outside lights and investigate). Rather, it is a dynamic process that takes into account not only things one would generally do in a given situation, but things that have recently become known (e.g., there have recently been coyotes seen in the area and one is known to be rabid), as well as other immediate environmental characteristics (e.g., it is snowing outside, I know my dog is outside, I know the police are already outside, etc.). All of these factors combine to inform me of the most appropriate behavior for the situation. If it were the case that humans had a rule for every possible contingency, the amount of storage that would be required to enable us to fluidly deal with most situations we encounter would rapidly become biologically untenable. We can all deal with contingencies like the one above with fairly little effort, but if it isn't based on rules, what is it based on? The assertion of the Cognitive Systems program at Sandia for the past 5 years is that at the heart of this ability to effectively navigate the world is an ability to discriminate between different contexts (i.e., Dynamic Context Discrimination, or DCD). While this assertion in and of itself might not seem earthshaking, it is compelling that this ability and its components show up in a wide variety of paradigms across different subdisciplines in psychology. We begin by outlining, at a high functional level, the basic ideas of DCD. We then provide evidence from several different literatures and paradigms that support our assertion that DCD is a core aspect of cognitive functioning. 
Finally, we discuss DCD and the computational model that we have developed as an instantiation of DCD in more detail. Before commencing with our overview of DCD, we should note that DCD is not necessarily a theory in the classic sense. Rather, it is a description of cognitive functioning that seeks to unify highly similar findings across a wide variety of literatures. Further, we believe that such convergence warrants a central place in efforts to computationally emulate human cognition. That is, DCD is a general principle of cognition. It is also important to note that while we are drawing parallels across many literatures, these are functional parallels and are not necessarily structural ones. That is, we are not saying that the same neural pathways are involved in these phenomena. We are only saying that the different neural pathways that are responsible for the appearance of these various phenomena follow the same functional rules - the mechanisms are the same even if the physical parts are distinct. Furthermore, DCD is not a causal mechanism - it is an emergent property of the way the brain is constructed. DCD is the result of neurophysiology (cf. John, 2002, 2003). Finally, it is important to note that we are not proposing a generic learning mechanism such that one biological algorithm can account for all situation interpretation. Rather, we are pointing out that there are strikingly similar empirical results across a wide variety of disciplines that can be understood, in part, by similar cognitive processes. It is entirely possible, even assumed in some cases (i.e., primary language acquisition) that these more generic cognitive processes are complemented and constrained by various limits which may or may not be biological in nature (cf. Bates & Elman, 1996; Elman, in press).

Analysis and control of distributed cooperative systems

Feddema, John T.; Schoenwald, David A.; Parker, Eric P.; Wagner, John S.

As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high bandwidth communication between agents. At the same time, a single soldier must easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate communication bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.
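
The convergence requirement described above can be illustrated with the simplest provably convergent distributed rule, nearest-neighbor averaging. This generic sketch (not the SDR control algorithms; invented positions and topology) shows agents reaching agreement using only local, low-bandwidth communication.

```python
# Five agents on a line-graph communication topology; each agent sees
# only its neighbors' scalar states (e.g., a rendezvous coordinate).
start = [0.0, 4.0, 9.0, 1.0, 6.0]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
eps = 0.25   # step size; eps below 1/max_degree guarantees convergence

positions = list(start)
for _ in range(500):
    # each agent nudges itself toward the average of its neighbors
    positions = [x + eps * sum(positions[j] - x for j in neighbors[i])
                 for i, x in enumerate(positions)]

target = sum(start) / len(start)   # symmetric updates preserve the mean
print(target, positions)           # every agent converges to the mean, 4.0
```

For any connected graph this iteration provably drives all states to the initial average, which is why variants of it appear throughout the distributed-control literature.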

Sensor placement in municipal water networks

Proposed for publication in the Journal of Water Resources Planning and Management.

Hart, William E.; Phillips, Cynthia A.; Berry, Jonathan W.; Watson, Jean-Paul W.

We present a model for optimizing the placement of sensors in municipal water networks to detect maliciously injected contaminants. An optimal sensor configuration minimizes the expected fraction of the population at risk. We formulate this problem as a mixed-integer program, which can be solved with generally available solvers. We find optimal sensor placements for three test networks with synthetic risk and population data. Our experiments illustrate that this formulation can be solved relatively quickly and that the predicted sensor configuration is relatively insensitive to uncertainties in the data used for prediction.
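
A toy version of the optimization can be written directly. The sketch below uses an invented three-scenario instance and brute force rather than the paper's mixed-integer formulation: each scenario is an ordered contamination path with cumulative population exposed on arrival, and a sensor stops further exposure at the first sensed node.

```python
from itertools import combinations

# Hypothetical instance (not the paper's test networks): 5 nodes,
# equally likely injection scenarios, place 2 sensors.
scenarios = [
    ([0, 1, 2, 3], [10, 30, 60, 100]),
    ([2, 3, 4],    [20, 50, 90]),
    ([4, 1, 0],    [15, 45, 80]),
]

def expected_risk(sensors):
    """Expected population at risk given sensors at the listed nodes."""
    total = 0
    for path, cum_pop in scenarios:
        risk = cum_pop[-1]            # undetected: whole path exposed
        for node, pop in zip(path, cum_pop):
            if node in sensors:       # first sensed node caps exposure
                risk = pop
                break
        total += risk
    return total / len(scenarios)

best = min(combinations(range(5), 2), key=expected_risk)
print(best, expected_risk(best))
```

The mixed-integer formulation in the paper expresses the same objective with detection variables per scenario, which is what lets standard solvers handle realistically sized networks where enumeration is hopeless.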

Unified parallel C and the computing needs of Sandia National Laboratories

Wen, Zhaofang W.

As Sandia looks toward petaflops computing and other advanced architectures, it is necessary to provide a programming environment that can exploit this additional computing power while supporting reasonable development times for applications. We therefore evaluate the Partitioned Global Address Space (PGAS) programming model, as implemented in Unified Parallel C (UPC), for its applicability. We report on our experiences implementing sorting and minimum spanning tree algorithms on a test system, a Cray T3E, with UPC support. We describe several macros that could serve as language extensions and several building-block operations that could serve as a foundation for a PGAS programming library. We analyze the limitations of the UPC implementation available on the test system, and suggest improvements necessary before UPC can be used in a production environment.

The Sandia GeoModel : theory and user's guide

Fossum, A.F.; Brannon, Rebecca M.

The mathematical and physical foundations and domain of applicability of Sandia's GeoModel are presented, along with descriptions of the source code and user instructions. The model is designed to be used in conventional finite element architectures, and (to date) it has been installed in five host codes without requiring customization of the model subroutines for any of these different installations. Although developed for application to geological materials, the GeoModel actually applies to a much broader class of materials, including rock-like engineered materials (such as concretes and ceramics) and even metals when simplified parameters are used. Nonlinear elasticity is supported through an empirically fitted function that has been found to be well suited to a wide variety of materials. Fundamentally, the GeoModel is a generalized plasticity model. As such, it includes a yield surface, but the term 'yield' is generalized to include any form of inelastic material response, including microcrack growth and pore collapse. The GeoModel supports deformation-induced anisotropy in a limited capacity through kinematic hardening (in which the initially isotropic yield surface is permitted to translate in deviatoric stress space to model Bauschinger effects). Aside from kinematic hardening, however, the governing equations are otherwise isotropic. The GeoModel is a genuine unification and generalization of simpler models. It can employ up to 40 material input and control parameters in the rare case when all features are used. Simpler idealizations (such as linear elasticity, von Mises yield, or Mohr-Coulomb failure) can be replicated by simply using fewer parameters. For high-strain-rate applications, the GeoModel supports rate dependence through an overstress model.
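
As an example of the simple idealizations such a generalized plasticity model subsumes, here is a minimal sketch of a von Mises yield check (illustrative yield stress, standard textbook formulation, not GeoModel source code).

```python
import numpy as np

def von_mises_yield(stress, sigma_y):
    """Yield function f = sqrt(3 J2) - sigma_y for a 3x3 Cauchy stress:
    f <= 0 is elastic, f > 0 signals inelastic response."""
    dev = stress - np.trace(stress) / 3.0 * np.eye(3)  # deviatoric part
    j2 = 0.5 * np.sum(dev * dev)                       # second invariant
    return np.sqrt(3.0 * j2) - sigma_y

# Uniaxial tension at exactly the yield stress sits on the yield surface.
uniaxial = np.diag([250e6, 0.0, 0.0])    # 250 MPa, hypothetical material
print(von_mises_yield(uniaxial, 250e6))  # ~0: on the yield surface
```

In a generalized model, the same f <= 0 test is kept but the surface is replaced by one capturing pressure dependence, microcrack growth, and pore collapse.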

Computational Fluid Dynamic simulations of pipe elbow flow

Homicz, Gregory F.

One problem facing today's nuclear power industry is flow-accelerated corrosion and erosion in pipe elbows. The Korean Atomic Energy Research Institute (KAERI) is performing experiments in their Flow-Accelerated Corrosion (FAC) test loop to better characterize these phenomena, and to develop advanced sensor technologies for the condition monitoring of critical elbows on a continuous basis. In parallel with these experiments, Sandia National Laboratories is performing Computational Fluid Dynamic (CFD) simulations of the flow in one elbow of the FAC test loop. The simulations are being performed using the FLUENT commercial software developed and marketed by Fluent, Inc. The model geometry and mesh were created using the GAMBIT software, also from Fluent, Inc. This report documents the results of the simulations that have been made to date; baseline results employing the RNG k-ε turbulence model are presented. The predicted value for the diametrical pressure coefficient is in reasonably good agreement with published correlations. Plots of the velocities, pressure field, wall shear stress, and turbulent kinetic energy adjacent to the wall are shown within the elbow section. Somewhat to our surprise, these indicate that the maximum values of both wall shear stress and turbulent kinetic energy occur near the elbow entrance, on the inner radius of the bend. Additional simulations were performed for the same conditions, but with the RNG k-ε model replaced by either the standard k-ε or the realizable k-ε turbulence model. The predictions using the standard k-ε model are quite similar to those obtained in the baseline simulation. However, with the realizable k-ε model, more significant differences are evident.
The maxima in both wall shear stress and turbulent kinetic energy now appear on the outer radius, near the elbow exit, and are approximately 11% and 14% greater, respectively, than those predicted in the baseline calculation; secondary maxima in both quantities still occur near the elbow entrance on the inner radius. Which set of results better reflects reality must await experimental corroboration. Additional calculations demonstrate that whether or not FLUENT's radial equilibrium pressure distribution option is used in the PRESSURE OUTLET boundary condition has no significant impact on the flowfield near the elbow. Simulations performed with and without the chemical sensor and associated support bracket that were present in the experiments demonstrate that the latter have a negligible influence on the flow in the vicinity of the elbow. The fact that the maxima in wall shear stress and turbulent kinetic energy occur on the inner radius is therefore not an artifact of having introduced the sensor into the flow.

Peridynamic modeling of membranes and fibers

Silling, Stewart A.

The peridynamic theory of continuum mechanics allows damage, fracture, and long-range forces to be treated as natural components of the deformation of a material. In this paper, the peridynamic approach is applied to thin two- and one-dimensional structures. For membranes, a constitutive model is described appropriate for rubbery sheets that can form cracks. This model is used to perform numerical simulations of the stretching and dynamic tearing of membranes. A similar approach is applied to one-dimensional string-like structures that undergo stretching, bending, and failure. Long-range forces similar to van der Waals interactions at the nanoscale influence the equilibrium configurations of these structures, how they deform, and possibly how they self-assemble.
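
The bond-based peridynamic force sum is easy to sketch in one dimension. The following illustration (hypothetical constants and discretization, not the paper's membrane model) shows that under a homogeneous stretch an interior point carries zero net bond force, while a free end feels the well-known peridynamic surface effect.

```python
import math

n, dx = 101, 0.01      # 1 m bar discretized into 101 nodes (illustrative)
horizon = 3.5 * dx     # each point interacts with neighbors in this radius
c = 1.0                # micromodulus, illustrative units

x = [i * dx for i in range(n)]
u = [0.001 * xi for xi in x]       # impose a homogeneous 0.1% stretch

def force(i):
    """Net bond force density at node i: sum over all (long-range) bonds
    within the horizon, each proportional to its bond stretch."""
    f = 0.0
    for j in range(n):
        xi = x[j] - x[i]                     # reference bond vector
        if j == i or abs(xi) > horizon:
            continue
        eta = u[j] - u[i]                    # relative displacement
        stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
        f += c * stretch * math.copysign(1.0, xi + eta) * dx
    return f

print(force(50), force(0))   # interior: ~0; free end: nonzero surface effect
```

Damage enters this framework naturally: a bond is simply dropped from the sum once its stretch exceeds a critical value, which is how cracks and tears emerge without a separate fracture criterion.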

Approach and development strategy for an agent-based model of economic confidence

Sprigg, James A.; Jorgensen, Craig R.; Pryor, Richard J.

We are extending the existing features of Aspen, a powerful economic modeling tool, and introducing new features to simulate the role of confidence in economic activity. The new model is built from a collection of autonomous agents that represent households, firms, and other relevant entities like financial exchanges and governmental authorities. We simultaneously model several interrelated markets, including those for labor, products, stocks, and bonds. We also model economic tradeoffs, such as decisions of households and firms regarding spending, savings, and investment. In this paper, we review some of the basic principles and model components and describe our approach and development strategy for emulating consumer, investor, and business confidence. The model of confidence is explored within the context of economic disruptions, such as those resulting from disasters or terrorist events.

On the Held-Karp relaxation for the asymmetric and symmetric traveling salesman problems

Mathematical Programming

Carr, Robert D.

A long-standing conjecture in combinatorial optimization says that the integrality gap of the famous Held-Karp relaxation of the metric STSP (Symmetric Traveling Salesman Problem) is precisely 4/3. In this paper, we show that a slight strengthening of this conjecture implies a tight 4/3 integrality gap for a linear programming relaxation of the metric ATSP (Asymmetric Traveling Salesman Problem). Our main tools are a new characterization of the integrality gap for linear objective functions over polyhedra, and the isolation of "hard-to-round" solutions of the relaxations. © Springer-Verlag 2004.

A user's guide to Sandia's Latin hypercube sampling software : LHS UNIX library/standalone version

Swiler, Laura P.; Wyss, Gregory D.

This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
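
The core stratification idea behind such a sampler can be sketched in a few lines. This is a generic Latin hypercube sampler (not the LHS library code): in every dimension, each of the n equal-probability strata receives exactly one point.

```python
import random
random.seed(42)

def lhs(n, d):
    """Latin hypercube sample of n points in [0,1)^d: one independent
    random permutation of the n strata per dimension, plus jitter
    within each stratum."""
    perms = [random.sample(range(n), n) for _ in range(d)]
    return [[(perms[k][i] + random.random()) / n for k in range(d)]
            for i in range(n)]

pts = lhs(10, 3)
for p in pts:
    print(p)
```

Because each one-dimensional margin is perfectly stratified, estimates of means and quantiles typically converge faster than with plain Monte Carlo at the same sample size.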

Validating DOE's Office of Science "capability" computing needs

Leland, Robert; Camp, William

A study was undertaken to validate the 'capability' computing needs of DOE's Office of Science. More than seventy members of the community provided information about algorithmic scaling laws, so that the impact of having access to Petascale capability computers could be assessed. We have concluded that the Office of Science community has described credible needs for Petascale capability computing.

Acceleration of the Generalized Global Basis (GGB) method for nonlinear problems

Proposed for publication in Journal of Computational Physics.

Tuminaro, Raymond S.; Shadid, John N.

Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB method accelerates a multigrid scheme by an additional coarse-grid correction that filters out slowly converging modes. This correction requires a potentially costly eigenvalue calculation. This paper considers reusing previously computed eigenspace information. One scheme enriches the prolongation operator with new eigenvectors, while the modified method (MGGB) selectively reuses the same prolongation. Both methods use principal angles between the subspaces spanned by the previous and current prolongation operators as the selection criterion. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.
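
Principal angles between subspaces, used here as the reuse criterion, are commonly computed from the SVD of the product of orthonormal bases (the Björck-Golub approach). A minimal sketch, not tied to the GGB implementation:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spans of A and B:
    orthonormalize each basis, then take arccos of the singular
    values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

A = np.eye(4)[:, :2]           # span{e1, e2}
B = np.eye(4)[:, 2:]           # span{e3, e4}, orthogonal to A
print(principal_angles(A, A))  # zeros: identical subspaces
print(principal_angles(A, B))  # pi/2 twice: orthogonal subspaces
```

Small principal angles mean the old and new eigenspaces nearly coincide, which is exactly the situation in which reusing the previous prolongation is safe.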

Solving elliptic finite element systems in near-linear time with support preconditioners

Proposed for publication in the SIAM Journal on Matrix Analysis and Applications.

Boman, Erik G.; Hendrickson, Bruce A.

We consider linear systems arising from the use of the finite element method for solving scalar linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and in particular it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the support number bounds, which control the number of iterations in the preconditioned iterative solver, depend on mesh quality measures but not on the problem size or shape of the domain.

An analytically solvable eigenvalue problem for the linear elasticity equations

Romero, L.A.

Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately, the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but these are not useful for computations involving large numbers of modes. A closed-form solution is presented here for all the eigenvalues and eigenfunctions of a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.

Will Moore's law be sufficient?

DeBenedictis, Erik

It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will yield progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. We review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10^21 FLOPS (1 zettaflops). The review also shows the basic nature of these extreme problems. We then review work by others showing that the theoretical maximum supercomputer power is very high indeed, but explain how a straightforward extrapolation of Moore's Law leads to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards, at 10^16-10^19 FLOPS (100 petaflops to 10 exaflops) depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, we explore the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach quantifies the maximum performance permitted by the laws of physics for extensions of current technology and then finds a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find that it can reach these limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles, while the 'advanced' features would provide a boost toward the limits of performance.
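
The gap between the projected plateau and the most ambitious applications is a short calculation. This sketch assumes the endpoint figures quoted above and a hypothetical 18-month doubling period; it is an illustration of the argument's arithmetic, not a projection from the paper.

```python
import math

plateau = 1e19   # upper end of the quoted maturity range, in FLOPS
needed = 1e21    # ~1 zettaflops, the quoted limit of useful simulation

doublings = math.log2(needed / plateau)   # factor-of-100 shortfall
years = doublings * 1.5                   # at one doubling per 18 months
print(doublings, years)                   # ~6.6 doublings, ~10 more years
```

In other words, even the optimistic end of the plateau leaves roughly a decade's worth of Moore's-Law-style doublings unrealized, which is the paper's motivation for looking beyond straightforward extrapolation.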

A comparison of inexact Newton and coordinate descent mesh optimization techniques

Knupp, Patrick K.

We compare inexact Newton and coordinate descent methods for optimizing the quality of a mesh by repositioning the vertices, where quality is measured by the harmonic mean of the mean-ratio metric. The effects of problem size, element size heterogeneity, and various vertex displacement schemes on the performance of these algorithms are assessed for a series of tetrahedral meshes.
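
The objective can be sketched directly. The following uses a standard formulation of the mean-ratio metric (quality 1 for the regular tetrahedron, approaching 0 as the element degenerates) and its harmonic mean over a mesh; it is illustrative, not the authors' implementation.

```python
import numpy as np

# Reference matrix whose columns are the edge vectors of a regular tet.
W = np.array([[1.0, 0.5, 0.5],
              [0.0, np.sqrt(3) / 2, np.sqrt(3) / 6],
              [0.0, 0.0, np.sqrt(2.0 / 3.0)]])
Winv = np.linalg.inv(W)

def mean_ratio(verts):
    """Mean-ratio quality of a tet: 3 det(S)^(2/3) / ||S||_F^2, where S
    maps the ideal element onto this one."""
    v = np.asarray(verts, dtype=float)
    S = (v[1:] - v[0]).T @ Winv    # columns are the element's edge vectors
    return 3.0 * np.linalg.det(S) ** (2.0 / 3.0) / np.sum(S * S)

def mesh_quality(tets):
    """The objective above: harmonic mean of per-element mean ratios."""
    q = [mean_ratio(t) for t in tets]
    return len(q) / sum(1.0 / qi for qi in q)

regular = [(0, 0, 0), (1, 0, 0),
           (0.5, np.sqrt(3) / 2, 0),
           (0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0))]
squashed = regular[:3] + [(0.5, np.sqrt(3) / 6, 0.4 * np.sqrt(2.0 / 3.0))]
print(mean_ratio(regular), mean_ratio(squashed))  # 1.0 vs. a value below 1
```

The harmonic mean heavily penalizes any single poor element, which is why it is a popular aggregate for vertex-repositioning optimizers.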

Advanced parallel programming models research and development opportunities

Brightwell, Ronald B.; Wen, Zhaofang W.

There is currently a large research and development effort within the high-performance computing community on advanced parallel programming models. This research can potentially have an impact on parallel applications, system software, and computing architectures in the next several years. Given Sandia's expertise and unique perspective in these areas, particularly on very large-scale systems, there are many areas in which Sandia can contribute to this effort. This technical report provides a survey of past and present parallel programming model research projects and provides a detailed description of the Partitioned Global Address Space (PGAS) programming model. The PGAS model may offer several improvements over the traditional distributed memory message passing model, which is the dominant model currently being used at Sandia. This technical report discusses these potential benefits and outlines specific areas where Sandia's expertise could contribute to current research activities. In particular, we describe several projects in the areas of high-performance networking, operating systems and parallel runtime systems, compilers, application development, and performance evaluation.

Communication-aware processor allocation for supercomputers

Leung, Vitus J.; Phillips, Cynthia A.

We give processor-allocation algorithms for grid architectures, where the objective is to select processors from a set of available processors to minimize the average number of communication hops. The associated clustering problem is as follows: given n points in R^d, find a size-k subset with minimum average pairwise L_1 distance. We present a natural approximation algorithm and show that it is a 7/4-approximation for 2D grids. In d dimensions, the approximation guarantee is 2 - 1/(2d), which is tight. We also give a polynomial-time approximation scheme (PTAS) for constant dimension d and report on experimental results.
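
On a small instance the clustering problem can be solved exactly by enumeration. This sketch (an invented 4x4 availability pattern, brute force rather than the paper's approximation algorithm) finds the size-k subset of free processors with minimum average pairwise L1 distance.

```python
from itertools import combinations

def avg_pairwise_l1(procs):
    """Average pairwise L1 (hop) distance among the chosen processors."""
    pairs = list(combinations(procs, 2))
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in pairs) / len(pairs)

# Hypothetical availability pattern on a 4x4 grid; allocate k = 4.
free = [(x, y) for x in range(4) for y in range(4) if (x + y) % 3]
k = 4
best = min(combinations(free, k), key=avg_pairwise_l1)
print(best, avg_pairwise_l1(best))   # the tightest available cluster
```

Enumeration is exponential in k, which is precisely why the paper's constant-factor approximation and PTAS matter for real schedulers.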

AztecOO user guide

Heroux, Michael A.

The Trilinos™ Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries. AztecOO™ is a package within Trilinos that enables the use of the Aztec solver library [19] with Epetra™ [13] objects. AztecOO provides access to Aztec preconditioners and solvers by implementing the Aztec 'matrix-free' interface using Epetra. While Aztec is written in C and procedure-oriented, AztecOO is written in C++ and is object-oriented. In addition to providing access to Aztec capabilities, AztecOO also provides some significant new functionality. In particular, it provides an extensible status-testing capability that allows expression of the sophisticated stopping criteria needed in production use of iterative solvers. AztecOO also provides mechanisms for using Ifpack [2], ML [20], and AztecOO itself as preconditioners.

Sundance 2.0 tutorial

Long, Kevin R.

Sundance is a system of software components that allows construction of an entire parallel simulator and its derivatives using a high-level symbolic language. With this high-level problem description, it is possible to specify a weak formulation of a PDE and its discretization method in a small amount of user-level code; furthermore, because derivatives are easily available, a simulation in Sundance is immediately suitable for accelerated PDE-constrained optimization algorithms. This paper is a tutorial for setting up and solving linear and nonlinear PDEs in Sundance. With several simple examples, we show how to set up mesh objects, geometric regions for BC application, the weak form of the PDE, and boundary conditions. Each example then illustrates use of an appropriate solver and solution visualization.

Xyce parallel electronic simulator design : mathematical formulation, version 2.0

Keiter, Eric R.; Hutchinson, Scott A.; Hoekstra, Robert J.; Russo, Thomas V.

This document contains a detailed description of the mathematical formulation of Xyce, a massively parallel SPICE-style circuit simulator developed at Sandia National Laboratories. The target audience of this document is people in the role of 'service provider'. An example of such a person would be a linear solver expert who spends a small fraction of his time developing solver algorithms for Xyce. Such a person probably is not an expert in circuit simulation, and would benefit from a description of the equations solved by Xyce. In this document, modified nodal analysis (MNA) is described in detail, with a number of examples. Issues that are unique to circuit simulation, such as voltage limiting, are also described in detail.
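
A minimal MNA example (a hypothetical 5 V source driving a two-resistor divider, not taken from the Xyce document) shows the structure of the equations: KCL rows in the node voltages, augmented with a branch current unknown and a branch row for the voltage source.

```python
import numpy as np

# 5 V source at node 1; R1 = 1k between nodes 1 and 2; R2 = 2k from
# node 2 to ground. Unknowns: v1, v2, and the source branch current iV
# (defined flowing from the + terminal through the source).
R1, R2, V = 1e3, 2e3, 5.0

A = np.array([[ 1/R1, -1/R1,        1.0],   # KCL at node 1
              [-1/R1,  1/R1 + 1/R2, 0.0],   # KCL at node 2
              [ 1.0,   0.0,         0.0]])  # branch equation: v1 = V
b = np.array([0.0, 0.0, V])

v1, v2, iV = np.linalg.solve(A, b)
print(v1, v2, iV)   # 5.0 V, 10/3 V, -5/3 mA (negative: source delivers)
```

The same pattern scales to a full netlist: conductances stamp into the node-voltage block, and each voltage source or inductor contributes one extra row and column, which is why the MNA matrix is sparse and indefinite in general.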

Final report on grand challenge LDRD project : a revolution in lighting : building the science and technology base for ultra-efficient solid-state lighting

Simmons, J.A.; Fischer, Arthur J.; Crawford, Mary H.; Abrams, B.L.; Biefeld, Robert M.; Koleske, Daniel K.; Allerman, A.A.; Figiel, J.J.; Creighton, J.R.; Coltrin, Michael E.; Tsao, Jeffrey Y.; Mitchell, Christine C.; Kerley, Thomas M.; Wang, George T.; Bogart, Katherine B.; Seager, Carleton H.; Campbell, Jonathan C.; Follstaedt, D.M.; Norman, Adam K.; Kurtz, S.R.; Wright, Alan F.; Myers, S.M.; Missert, Nancy A.; Copeland, Robert G.; Provencio, P.N.; Wilcoxon, Jess P.; Hadley, G.R.; Wendt, J.R.; Kaplar, Robert K.; Shul, Randy J.; Rohwer, Lauren E.; Tallant, David T.; Simpson, Regina L.; Moffat, Harry K.; Salinger, Andrew G.; Pawlowski, Roger P.; Emerson, John A.; Thoma, Steven T.; Cole, Phillip J.; Boyack, Kevin W.; Garcia, Marie L.; Allen, Mark S.; Burdick, Brent B.; Rahal, Nabeel R.; Monson, Mary A.; Chow, Weng W.; Waldrip, Karen E.

This SAND report is the final report on Sandia's Grand Challenge LDRD Project 27328, 'A Revolution in Lighting -- Building the Science and Technology Base for Ultra-Efficient Solid-state Lighting.' This project, which for brevity we refer to as the SSL GCLDRD, is considered one of Sandia's most successful GCLDRDs. As a result, this report reviews not only technical highlights, but also the genesis of the idea for Solid-state Lighting (SSL), the initiation of the SSL GCLDRD, and the goals, scope, success metrics, and evolution of the SSL GCLDRD over the course of its life. One way in which the SSL GCLDRD was different from other GCLDRDs was that it coincided with a larger effort by the SSL community - primarily industrial companies investing in SSL, but also universities, trade organizations, and other Department of Energy (DOE) national laboratories - to support a national initiative in SSL R&D. Sandia was a major player in publicizing the tremendous energy savings potential of SSL, and in helping to develop, unify and support community consensus for such an initiative. Hence, our activities in this area, discussed in Chapter 6, were substantial: white papers; SSL technology workshops and roadmaps; support for the Optoelectronics Industry Development Association (OIDA), DOE and Senator Bingaman's office; extensive public relations and media activities; and a worldwide SSL community website. Many science and technology advances and breakthroughs were also enabled under this GCLDRD, resulting in: 55 publications; 124 presentations; 10 book chapters and reports; 5 U.S. patent applications including 1 already issued; and 14 patent disclosures not yet applied for. Twenty-six invited talks were given, at prestigious venues such as the American Physical Society Meeting, the Materials Research Society Meeting, the AVS International Symposium, and the Electrochemical Society Meeting. 
This report contains a summary of these science and technology advances and breakthroughs, with Chapters 1-5 devoted to the five technical task areas: (1) Fundamental Materials Physics; (2) III-Nitride Growth Chemistry and Substrate Physics; (3) III-Nitride MOCVD Reactor Design and In-Situ Monitoring; (4) Advanced Light-Emitting Devices; and (5) Phosphors and Encapsulants. Chapter 7 (Appendix A) contains a listing of publications, presentations, and patents. Finally, the SSL GCLDRD resulted in numerous actual and pending follow-on programs for Sandia, including multiple grants from DOE and the Defense Advanced Research Projects Agency (DARPA), and Cooperative Research and Development Agreements (CRADAs) with SSL companies. Many of these follow-on programs arose out of contacts developed through our External Advisory Committee (EAC). In this and other ways, the EAC played a very important role. Chapter 8 (Appendix B) contains the full (unedited) text of the EAC reviews that were held periodically during the course of the project.
