The aim of this project is to develop low-dimensional parametric (deterministic) models of complex networks, to do so using compressive sensing (CS) and multiscale analysis, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on a multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general and therefore neither especially efficient nor customized. Other model-based CS techniques exist but have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.
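For reference, the generic (structure-agnostic) CS reconstruction alluded to above can be posed as a standard basis-pursuit problem; this is the textbook formulation, not necessarily the exact solver used in the project:

$$\hat{x} = \arg\min_{x} \|x\|_{1} \quad \text{subject to} \quad y = \Phi\,\Psi\,x,$$

where $y$ collects the sampled entries of the adjacency matrix, $\Phi$ is the sampling operator, $\Psi$ is the multiresolution transform under which the preprocessed ('blocky') adjacency matrix is sparse, and $\hat{x}$ holds the recovered transform coefficients.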
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating them using those data sets. Our models are intended to describe the high-speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving, and flow branches. They fall into three categories: (1) network flow models, in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes; (2) CFD (Computational Fluid Dynamics) models, in which flow in and between vessels is modeled in three dimensions; and (3) coupled network/CFD models, in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO as our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not captured by the model. Here we describe how vessel heat transfer correlations were improved using the data, and we present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and demonstrates for the first time that flow and heat transfer in vessels can be modeled directly without the need for correlations.
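As an illustration of the kind of closure a one-dimensional network flow model requires (a representative textbook form, not necessarily the specific correlation implemented in NETFLOW), the frictional pressure drop along a tube segment may be written as

$$\Delta p = f\,\frac{L}{D}\,\frac{\rho u^{2}}{2},$$

where $f$ is a friction factor correlated against Reynolds number and wall roughness, $L$ and $D$ are the tube length and diameter, $\rho$ is the gas density, and $u$ is the mean velocity; vessel heat transfer enters through an analogous Nusselt-number correlation.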
A persistent challenge in simulating damage of natural geological materials, as well as rock-like engineered materials, is the development of efficient and accurate constitutive models. The common feature of these brittle and quasi-brittle materials is the presence of flaws such as porosity and networks of microcracks. The desired models need to be able to predict the material response over a wide range of porosities and strain rates. Kayenta (formerly called the Sandia GeoModel) is a unified general-purpose constitutive model that strikes a balance between first-principles micromechanics and phenomenological or semi-empirical modeling strategies. However, despite its sophistication and ability to reduce to several classical plasticity theories, Kayenta is incapable of modeling deformation of ductile materials, in which deformation is dominated by dislocation generation and movement that can lead to significant heating. This stems from Kayenta's roots as a geological model, where heating due to inelastic deformation is often neglected or presumed to be incorporated implicitly through the elastic moduli. The sophistication of Kayenta and its large feature set, however, make it an attractive candidate model to which thermal effects can be added. This report outlines the initial work in doing just that: extending the capabilities of Kayenta to include deformation of ductile materials, for which thermal effects cannot be neglected. Thermal effects are included based on an assumption of adiabatic loading by computing the bulk and thermal responses of the material with the Kerley Mie-Grueneisen equation of state and adjusting the yield surface according to the updated thermal state. This new version of Kayenta, referred to as Thermo-Kayenta throughout this report, is capable of reducing to classical Johnson-Cook plasticity in special-case single-element simulations and has been used to obtain reasonable results in more complicated Taylor impact simulations in LS-Dyna. Despite these successes, however, Thermo-Kayenta requires additional refinement before it is thermodynamically consistent and before it can be considered superior to other, more mature thermoplastic models. The initial thermal development, results, and required refinements are all detailed in the following report.
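For orientation, a hedged sketch of the limit Thermo-Kayenta is shown to reproduce, using the standard Johnson-Cook flow stress together with an adiabatic temperature update (textbook forms; the exact expressions inside Thermo-Kayenta may differ):

$$\sigma_y = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C\ln\dot{\varepsilon}^{*}\right)\left(1 - T^{*m}\right), \qquad T^{*} = \frac{T - T_{\mathrm{ref}}}{T_{\mathrm{melt}} - T_{\mathrm{ref}}},$$

with the temperature under adiabatic loading updated from the plastic work,

$$\Delta T = \frac{\beta}{\rho\,c_p}\int \sigma : \mathrm{d}\varepsilon_p,$$

where $\beta$ is the Taylor-Quinney fraction of plastic work converted to heat, $\rho$ the density, and $c_p$ the specific heat.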
U.S. energy needs (minimizing climate change, mining and extraction technologies, safe waste disposal) require the ability to simulate, model, and predict the behavior of subsurface systems. The authors propose development of a coupled thermal, hydrological, mechanical, chemical (THMC) modeling capability for massively parallel applications that can address these critical needs. The goal and expected outcome of this research is a state-of-the-art, extensible simulation capability, based upon SIERRA Mechanics, to address multiphase, multicomponent reactive transport coupled to nonlinear geomechanics in heterogeneous (geologic) porous materials. The THMC code provides a platform for integrating research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture.
Line-of-sight jitter in staring sensor data, combined with scene information, can obscure critical information for change analysis or target detection. Consequently, the jitter effects must be significantly reduced before data analysis. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however, PCA requires image frames that contain the jitter variation that is to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all the variation without the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene. It then computes derivatives of that image in the horizontal and vertical directions. The basis set for estimation of the background and the jitter consists of the image and its derivative factors. This approach has several advantages, including: (1) only a small number of images are required to develop the model, (2) the model can estimate backgrounds with jitter different from the input training images, (3) the method is particularly effective for sub-pixel jitter, and (4) the model can be developed from images before the change detection process. In addition, the scores from projecting the factors on the background provide estimates of the jitter magnitude and direction for registration of the images. In this paper we will present a discussion of the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
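A minimal sketch of the underlying idea, assuming a first-order Taylor model of a sub-pixel shift (the function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def estimate_jitter(reference, frame):
    """Estimate sub-pixel jitter by least-squares projection of a frame onto
    the reference image and its spatial derivatives.

    To first order, a slightly shifted frame satisfies
        frame ~ a * I + cx * dI/dx + cy * dI/dy,
    so the fitted coefficients (cx, cy) indicate the jitter magnitude and
    direction (up to the sign convention of the shift).
    """
    dIdy, dIdx = np.gradient(reference)            # derivatives along rows, columns
    basis = np.column_stack([reference.ravel(),    # background factor
                             dIdx.ravel(),         # horizontal jitter factor
                             dIdy.ravel()])        # vertical jitter factor
    coeffs, *_ = np.linalg.lstsq(basis, frame.ravel(), rcond=None)
    background = (basis @ coeffs).reshape(reference.shape)
    return coeffs[1], coeffs[2], background

# Example: a smooth synthetic scene and a copy shifted by a fraction of a pixel.
y, x = np.mgrid[0:64, 0:64].astype(float)
reference = np.exp(-((x - 32.0)**2 + (y - 32.0)**2) / 50.0)
frame = np.exp(-((x - 31.6)**2 + (y - 32.2)**2) / 50.0)
cx, cy, bg = estimate_jitter(reference, frame)
print(f"estimated jitter scores: horizontal={cx:.2f}, vertical={cy:.2f}")
```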
Sandia National Laboratories (SNL) Technical Area V (TA-V) has provided unique nuclear experimental environments for decades. The technologies tested in TA-V facilities have furthered the United States Nuclear Weapons program and have contributed to the national energy and homeland security mission. The importance of TA-V working efficiently to produce an attractive and effective platform for experiments should not be underestimated. Throughout its history, TA-V has evolved to address multiple and diverse sets of requirements. These requirements evolved over many years; however, they had not been managed or communicated comprehensively or effectively. A series of programmatic findings over several years of external audits was evidence of this shortcoming. Today, these same requirements flow down through a new TA-V management system that produces consistently applied and reproducible approaches to work practices. In 2008, the TA-V department managers assessed the state of TA-V services and work activities to understand how to improve customer interfaces, stakeholder perceptions, and workforce efficiencies. The TA-V management team initiated the TA-V Transformation Project after they deemed the pre-June 2008 operational model to be ineffective in managing work and in providing integrated, continuous improvement to TA-V processes. This report summarizes the TA-V Transformation Project goals, activities, and accomplishments.
The polynomial chaos expansion provides a means of representing any L2 random variable as a sum of polynomials that are orthogonal with respect to a chosen measure. Examples include the Hermite polynomials with Gaussian measure on the real line and the Legendre polynomials with uniform measure on an interval. Polynomial chaos can be used to reformulate an uncertain ODE system, via Galerkin projection, as a new, higher-dimensional, deterministic ODE system that describes the evolution of each mode of the polynomial chaos expansion. It is of interest to explore the eigenstructure of the original and reformulated ODE systems by studying the eigenvalues and eigenvectors of their Jacobians. In this talk, we study the distribution of the eigenvalues of the two Jacobians. We outline in general the location of the eigenvalues of the new system with respect to those of the original system, and examine the effect of expansion order on this distribution.
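In outline (standard notation, assumed here rather than taken from the talk), a state $u(t,\xi)$ depending on random inputs $\xi$ is expanded as

$$u(t,\xi) \approx \sum_{k=0}^{P} u_k(t)\,\Psi_k(\xi),$$

and Galerkin projection of the original system $\dot{u} = f(u,\xi)$ onto each basis polynomial yields the deterministic mode equations

$$\dot{u}_k(t) = \frac{\bigl\langle f\bigl(\textstyle\sum_j u_j\Psi_j,\ \xi\bigr),\,\Psi_k \bigr\rangle}{\langle \Psi_k^{2}\rangle}, \qquad k = 0,\dots,P,$$

whose Jacobian with respect to the modes $u_k$ is the higher-dimensional counterpart compared against the Jacobian $\partial f/\partial u$ of the original system.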
The 2010 Nuclear Posture Review (NPR) and President Obama's 2009 Prague speech highlighted two key objectives with an inherent underlying tension: (1) moving toward a world free of nuclear weapons; and (2) sustaining a safe, secure, and effective nuclear arsenal. The first objective depends, inter alia, upon reductions in stockpiles at home and abroad and upon maintaining stability. The second objective depends upon needed investments in modernization and life extension. These objectives are being pursued predominantly in parallel by largely separate communities.
Directory services are used by almost every enterprise computing environment to provide data concerning users, computers, contacts, and other objects. Virtual directories are components that provide directory services in a highly customized manner. Unfortunately, though the use of virtual directory services is widespread, an analysis of the risks posed by their unique position and architecture has not been completed. We present a detailed analysis of six attacks on virtual directory services, including steps for detection and prevention. We also describe various categories of attack risks and discuss what is necessary to launch an attack on virtual directories. Finally, we present a framework for analyzing risks to individual enterprise computing virtual directory instances. We show how to apply this framework to an example implementation and discuss the benefits of doing so.
The present paper is the second in a series published at I/ITSEC that seeks to explain the efficacy of multi-role experiential learning employed to create engaging game-based training methods transitioned to the U.S. Army, U.S. Army Special Forces, Civil Affairs, and Psychological Operations teams. The first publication (I/ITSEC 2009) summarized findings from a quantitative study that investigated experiential learning in the multi-player, PC-based game module transitioned to PEO-STRI, DARWARS Ambush! NK (non-kinetic). The 2009 publication reported that participants in multi-role (Player and Reflective Observer/Evaluator) game-based training reported statistically significant learning and engagement. Additionally, when the means of the two groups (Player and Reflective Observer/Evaluator) were compared, they were not statistically significantly different from each other; that is, both playing and observing/evaluating were engaging learning modalities. The Observer/Evaluator role was designed to provide an opportunity for real-time reflection and meta-cognitive learning during game play. Results indicated that this role was an engaging way to learn about communication, that participants learned something about cultural awareness, and that the skills they learned were helpful in problem solving and decision-making.
The present paper seeks to further understanding of what and how users of non-kinetic game-based missions learn by revisiting the 2009 quantitative study with additional investigation, such as stochastic player performance analysis using latent semantic analysis and graph visualizations. The results are applicable to first-person game-based learning systems designed to enhance trainee intercultural communication, interpersonal skills, and adaptive thinking. In the full paper, we discuss results obtained from data collected from 78 research participants of diverse backgrounds who trained by engaging in tasks directly, as well as by observing and evaluating peer performance in real time. The goal is two-fold: first, to quantify and visualize detailed player performance data drawn from game play transcription to give further understanding of the results in the 2009 I/ITSEC paper; and second, to develop a set of technologies from this quantification and visualization approach into a generalized application tool to aid in future games' development of player/learner models and game adaptation algorithms.
Specifically, this paper addresses questions such as: “Are there significant differences in one's experience when an experiential learning task is observed first, and then performed by the same individual?” “Are there significant differences among groups participating in different roles in non-kinetic engagement training, especially when one role requires more active participation than the other?” “What is the impact of behavior modeling on learning in games?” In answering these questions, the present paper reinforces the 2009 empirical study's conclusion that, contrary to current trends in military game development, experiential learning is enhanced by innovative training approaches designed to facilitate trainee mastery of reflective observation and abstract conceptualization as much as performance-based skills.
For oxy-combustion with flue gas recirculation, as is commonly employed, it is recognized that elevated CO{sub 2} levels affect radiant transport, the heat capacity of the gas, and other gas transport properties. A topic of widespread speculation has concerned the effect of the CO{sub 2} gasification reaction with coal char on the char burning rate. To give clarity to the likely impact of this reaction on the oxy-fuel combustion of pulverized coal char, the Surface Kinetics in Porous Particles (SKIPPY) code was employed for a range of potential CO{sub 2} reaction rates for a high-volatile bituminous coal char particle (130 {micro}m diameter) reacting in several O{sub 2} concentration environments. The effects of boundary layer chemistry are also examined in this analysis. Under oxygen-enriched conditions, boundary layer reactions (converting CO to CO{sub 2}, with concomitant heat release) are shown to increase the char particle temperature and burning rate, while decreasing the O{sub 2} concentration at the particle surface. The CO{sub 2} gasification reaction acts to reduce the char particle temperature (because of the reaction endothermicity) and thereby reduces the rate of char oxidation. Interestingly, the presence of the CO{sub 2} gasification reaction increases the char conversion rate for combustion at low O{sub 2} concentrations, but decreases char conversion for combustion at high O{sub 2} concentrations. These calculations give new insight into the complexity of the effects from the CO{sub 2} gasification reaction and should help improve the understanding of experimentally measured oxy-fuel char combustion and burnout trends in the literature.
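For reference, the competing heterogeneous and boundary-layer steps discussed above are summarized by the standard global reactions (listed for orientation, not as the specific SKIPPY mechanism):

$$\mathrm{C(s)} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{CO}\ \ (\text{exothermic char oxidation}), \qquad \mathrm{C(s)} + \mathrm{CO_2} \rightarrow 2\,\mathrm{CO}\ \ (\text{endothermic gasification}),$$
$$\mathrm{CO} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{CO_2}\ \ (\text{exothermic boundary-layer oxidation}),$$

which makes clear why boundary-layer CO oxidation raises the particle temperature while the gasification step lowers it.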
With the aim of developing photo-responsive composites, the UV photo-reduction of aqueous titanium oxide nanoparticle-graphene oxide (TiO{sub 2}-GO) dispersions (Lambert et al., J. Phys. Chem. C 2009, 113 (46), 19812-19823) was undertaken. Photo-reduction led to the formation of a black precipitate as well as a soluble portion comprised of titanium oxide nanoparticle-reduced graphene oxide (TiO{sub 2}-RGO). When allowed to slowly evaporate, self-assembled titanium oxide nanoparticle-reduced graphene oxide (SA-TiO{sub 2}-RGO) films formed at the air-liquid interface of the solution. The thickness of the SA-TiO{sub 2}-RGO films ranges from {approx}30-100 nm when deposited on substrates, and the films appear to be comprised of a mosaic assembly of graphene nanosheets and TiO{sub 2}, as observed by scanning electron microscopy. Raman spectroscopy and X-ray photoelectron spectroscopy indicate that the graphene oxide is only partially reduced in the SA-TiO{sub 2}-RGO material. These films were also deposited onto inter-digitated electrodes and their photo-responsive behavior was examined. UV exposure led to a {approx}200 kOhm decrease in resistance across the device, resulting in a cathodically biased film. The cathodic bias of the films was utilized for the subsequent reduction of AgNO{sub 3} into silver (Ag) nanoparticles, forming a ternary Ag-(SA-TiO{sub 2}-RGO) composite. Various aspects of the self-assembled films, their photoconductive properties, and potential applications will be presented.
The performance of the Neosonic polymer Li-ion battery was measured using a number of tests, including capacity, capacity as a function of temperature, ohmic resistance, spectral impedance, a hybrid pulsed power test, a utility partial state of charge (PSOC) pulsed cycle test, and an over-charge/voltage abuse test. The goal of this work was to evaluate the performance of the polymer Li-ion battery technology for utility applications requiring frequent charges and discharges, such as voltage support, frequency regulation, wind farm energy smoothing, and solar photovoltaic energy smoothing. Test results have indicated that the Neosonic polymer Li-ion battery technology can provide power levels up to the 10C{sub 1} discharge rate with minimal energy loss compared to the 1 h (1C) discharge rate. Two of the three cells used in the utility PSOC pulsed cycle test completed about 12,000 cycles with only a gradual loss in capacity of 10 and 13%. The third cell experienced a 40% loss in capacity at about 11,000 cycles. The DC ohmic resistance and AC spectral impedance measurements also indicate that there were increases in impedance after cycling, especially for the third cell, whose series resistance (Rs) increased significantly along with extensive ballooning of the foil pouch. Finally, at a 1C (10 A) charge rate, the over-charge/voltage abuse test with cell confinement similar to a multi-cell string resulted in the cell venting hot gases at about 45 C, 45 minutes into the test. At 104 minutes into the test the cell voltage spiked to the 12 volt limit and continued at that limit to the end of the test at 151 minutes. In summary, the Neosonic cells performed as expected, with good cycle life and safety.
The Nuclear Posture Review (NPR) is designed to make the world safer by reducing the role of U.S. nuclear weapons and reducing the salience of nuclear weapons. The U.S. also seeks to maintain a credible nuclear deterrent and to reinforce regional security architectures with missile defenses and other conventional military capabilities. But recent studies suggest that nuclear proliferation is a direct response to the perceived threat of U.S. conventional capabilities, not the U.S. nuclear stockpile. If this is true, then the intent of the NPR to reduce the role and numbers of nuclear weapons and strengthen conventional military capabilities may actually make the world less safe. The first stated objective of the NPR is to reduce the role and numbers of U.S. nuclear weapons, reduce the salience of nuclear weapons, and move step by step toward eliminating them. The second stated objective is a reaffirmation of the U.S. commitment to maintaining a strong deterrent, which forms the basis of U.S. assurances to allies and partners. The pathway, made explicit throughout the NPR, for reducing the role and numbers of nuclear weapons while maintaining a credible nuclear deterrent and reinforcing regional security architectures is to give conventional forces and capabilities and missile defenses (e.g., non-nuclear elements) a greater share of the deterrence burden.
The authors have detected magnetic fields from the human brain with a compact, fiber-coupled rubidium spin-exchange-relaxation-free magnetometer. Optical pumping is performed on the D1 transition and Faraday rotation is measured on the D2 transition. The beams share an optical axis, with dichroic optics preparing beam polarizations appropriately. A sensitivity of <5 fT/{radical}Hz is achieved. Evoked responses resulting from median nerve and auditory stimulation were recorded with the atomic magnetometer. Recordings were validated by comparison with those taken by a commercial magnetoencephalography system. The design is amenable to arraying sensors around the head, providing a framework for noncryogenic, whole-head magnetoencephalography.
The role of crystal coherence length on the infrared optical response of MgO thin films was investigated with regard to Reststrahlen band photon-phonon coupling. Preferentially (001)-oriented sputtered and evaporated ion-beam assisted deposited thin films were prepared on silicon and annealed to vary film microstructure. Film crystalline coherence was characterized by x-ray diffraction line broadening and transmission electron microscopy. The infrared dielectric response revealed a strong dependence of dielectric resonance magnitude on crystalline coherence. Shifts to lower transverse optical phonon frequencies were observed with increased crystalline coherence. Increased optical phonon damping is attributed to increasing granularity and intergrain misorientation.
This paper compares three approaches for model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not model selection methods per se, although one can select the model that yields the smallest sum-of-squares error. Information theoretic approaches balance overfitting with model accuracy by combining a log-likelihood term that reflects goodness of fit with terms that penalize additional parameters. Bayesian model selection involves calculating the posterior probability that each model is correct, given experimental data and prior probabilities that each model is correct. As part of this calculation, one often calibrates the parameters of each model, and this calibration is included in the Bayesian calculations. Our approach is demonstrated on a structural dynamics example with models for energy dissipation and peak force across a bolted joint. The three approaches are compared, and the influence of the log-likelihood term in all approaches is discussed.
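As a concrete summary of the three viewpoints (standard textbook forms, stated here for illustration): information criteria such as AIC and BIC combine the maximized log-likelihood $\ln\hat{L}$ with a penalty on the number of parameters $k$, while Bayesian selection weighs the model evidence,

$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L},$$
$$P(M_i \mid D) \propto P(M_i)\int P(D \mid \theta, M_i)\,P(\theta \mid M_i)\,\mathrm{d}\theta,$$

where $n$ is the number of data points and the integral over the parameters $\theta$ (the marginal likelihood) is where the calibration of each model enters the Bayesian comparison.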
The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.
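Concretely, for a three-way tensor the weighted least squares problem referred to above can be written in standard CP notation as

$$f(\mathbf{A},\mathbf{B},\mathbf{C}) = \tfrac{1}{2}\,\bigl\|\,\mathcal{W} \ast \bigl(\mathcal{X} - [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!]\bigr)\bigr\|_F^{2}, \qquad w_{ijk} = \begin{cases} 1, & x_{ijk}\ \text{known},\\ 0, & x_{ijk}\ \text{missing}, \end{cases}$$

where $\ast$ denotes the elementwise product and $[\![\mathbf{A},\mathbf{B},\mathbf{C}]\!]$ is the CP model; CP-WOPT minimizes $f$ over the factor matrices with a gradient-based first-order method.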
Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax{sup m-1} = lambda x subject to ||x||=1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
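For reference, the shifted iteration underlying SS-HOPM has the form (the convergence-ensuring choice of the shift α is detailed in the paper)

$$\hat{x}_{k+1} = \mathcal{A}x_k^{m-1} + \alpha x_k, \qquad x_{k+1} = \frac{\hat{x}_{k+1}}{\|\hat{x}_{k+1}\|},$$

and any fixed point satisfies the eigenpair equation $\mathcal{A}x^{m-1} = \lambda x$ with $\|x\| = 1$ and $\lambda = x^{\top}\bigl(\mathcal{A}x^{m-1}\bigr)$.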
The precipitation of Ag{sub 2}Te in a PbTe matrix is investigated using electron microscopy and atom probe tomography. We observe the formation of oriented nanoscale Ag{sub 2}Te precipitates in PbTe. These precipitates initially form as coherent spherical nanoparticles and evolve into flattened semi-coherent disks during coarsening. This change in morphology is consistent with equilibrium shape theory for coherently strained precipitates. Upon annealing at elevated temperatures these precipitates eventually revert to an equiaxed morphology. We suggest this shape change occurs once the precipitates grow beyond a critical size, making it favorable to relieve the elastic coherency strains by forming interfacial misfit dislocations. These investigations of the shape and coherency of Ag{sub 2}Te precipitates in PbTe should prove useful in the design of nanostructured thermoelectric materials.
Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.
Cyber security analysis tools are necessary to evaluate the security, reliability, and resilience of networked information systems against cyber attack. It is common practice in modern cyber security analysis to separately utilize real systems of computers, routers, switches, firewalls, computer emulations (e.g., virtual machines), and simulation models to analyze the interplay between cyber threats and safeguards. In contrast, Sandia National Laboratories has developed novel methods to combine these evaluation platforms into a hybrid testbed of real, emulated, and simulated components, enabling analysis of the security features and components of a networked information system. When performing cyber security analysis on a system of interest, it is critical to represent the subject security components realistically and in high fidelity. In some experiments, the security component may be the actual hardware and software, with all the surrounding components represented in simulation or with surrogate devices. Sandia National Laboratories has developed a cyber testbed that combines modeling and simulation capabilities with virtual machines and real devices to represent, in varying fidelity, secure networked information system architectures and devices. Using this capability, secure networked information system architectures can be represented in our testbed on a single, unified computing platform. This provides an 'experiment-in-a-box' capability. The result is rapidly produced, large-scale, relatively low-cost, multi-fidelity representations of networked information systems. These representations enable analysts to quickly investigate cyber threats and test protection approaches and configurations.
This 1/2 day workshop will survey various applications of XRD analysis, including in-situ analyses and neutron diffraction. The analyses will include phase ID, crystallite size and microstrain, preferred orientation and texture, lattice parameters and solid solutions, and residual stress. Brief overviews of high-temperature in-situ analysis, neutron diffraction and synchrotron studies will be included.
The analysis of networked activities is dramatically more challenging than many traditional kinds of analysis. A network is defined by a set of entities (people, organizations, banks, computers, etc.) linked by various types of relationships. These entities and relationships are often uninteresting alone and only become significant in aggregate. The analysis and visualization of such networks is one of the driving factors behind the creation of the Titan Toolkit. Given the broad set of problem domains and the wide-ranging databases in use by the information analysis community, the Titan Toolkit's flexible, component-based pipeline provides an excellent platform for constructing specific combinations of network algorithms and visualizations.
This research explores the thermodynamics, economics, and environmental impacts of innovative, stationary, polygenerative fuel cell systems (FCSs). Each main report section is split into four subsections. The first subsection, 'Potential Greenhouse Gas (GHG) Impact of Stationary FCSs,' quantifies the degree to which GHG emissions can be reduced at a U.S. regional level with the implementation of different FCS designs. The second subsection, 'Optimizing the Design of Combined Heat and Power (CHP) FCSs,' discusses energy network optimization models that evaluate novel strategies for operating CHP FCSs so as to minimize (1) electricity and heating costs for building owners and (2) emissions of the primary GHG - carbon dioxide (CO{sub 2}). The third subsection, 'Optimizing the Design of Combined Cooling, Heating, and Electric Power (CCHP) FCSs,' is similar to the second subsection but is expanded to include capturing FCS heat with absorptive cooling cycles to produce cooling energy. The fourth subsection, 'Thermodynamic and Chemical Engineering Models of CCHP FCSs,' discusses the physics and thermodynamic limits of CCHP FCSs.
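A schematic of the kind of dispatch optimization evaluated in the second and third subsections (a generic formulation assumed here for illustration, not quoted from the report): over each time period $t$, choose the FCS electric output $P_t$, recovered heat $Q_t$, and grid purchases $E_t$ to

$$\min \sum_t \left( c^{\mathrm{fuel}}_t F_t + c^{\mathrm{grid}}_t E_t \right) \quad \text{or} \quad \min \sum_t \left( e^{\mathrm{fuel}} F_t + e^{\mathrm{grid}} E_t \right),$$

subject to electricity and heating (or cooling) balances such as $P_t + E_t \ge D^{\mathrm{elec}}_t$ and $Q_t + Q^{\mathrm{aux}}_t \ge D^{\mathrm{heat}}_t$, where $F_t$ is the fuel consumption tied to $P_t$ and $Q_t$ through the stack's electrical and thermal efficiencies, $c$ denotes prices, and $e$ denotes CO{sub 2} emission factors.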
Many possible applications requiring or benefiting from a wireless network are available for bolstering physical security and awareness at high security installations or facilities. These enhancements are not always straightforward and may require careful analysis, selection, tuning, and implementation of wireless technologies. In this paper, an introduction to wireless networks and the task of enhancing physical security is first given. Next, numerous applications of a wireless network are brought forth. The technical issues that arise when using a wireless network to support these applications are then discussed. Finally, a summary is presented.
Attractive for numerous technological applications, ferroelectric oxides constitute an important class of multifunctional compounds. Intense experimental efforts have been made recently in synthesizing, processing, and understanding ferroelectric nanostructures. This work will present the systematic characterization and optimization of barium titanate and lead lanthanum zirconate titanate nanoparticle-based ceramics. The nanoparticles have been synthesized using several solution- and pH-based synthesis processing routes and employed to fabricate polycrystalline ceramic and nanocomposite-based components. The dielectric and ferroelectric properties of these various components have been gauged by impedance analysis and electromechanical response measurements and will be discussed.
Technologies that have been developed for microelectromechanical systems (MEMS) have been applied to the fabrication of field desorption arrays. These techniques include the use of thick films for enhanced dielectric stand-off, as well as an integrated gate electrode. The increased complexity of MEMS fabrication provides enhanced design flexibility over traditional methods.
Nano-materials have shown unique crystallite-dependent properties that present distinct advantages for dielectric applications. PLZT is an excellent dielectric material used in several applications and may benefit from crystallite engineering; however, complex systems such as PLZT require well-controlled synthesis techniques. An aqueous-based synthesis route has been developed, using standard precursor chemicals and scalable techniques to produce large batch sizes. The synthesis will be briefly covered, followed by a more in-depth discussion of incorporating nanocrystalline PLZT into a working device. Initial electrical properties will be presented, illustrating the potential benefits and associated difficulties of working with PLZT nano-materials.
This report summarizes the current statistical analysis capability of OVIS and how it works in conjunction with the OVIS data readers and interpolators. It also documents how to extend these capabilities. OVIS is a tool for parallel statistical analysis of sensor data to improve system reliability. Parallelism is achieved using a distributed data model: many sensors on similar components (metaphorically, sheep) insert measurements into a series of databases on computers reserved for analyzing the measurements (metaphorically, shepherds). Each shepherd node then processes the sheep data stored locally, and the results are aggregated across all shepherds. OVIS uses the Visualization Tool Kit (VTK) statistics algorithm class hierarchy to perform analysis of each process's data but avoids VTK's model aggregation stage, which uses the Message Passing Interface (MPI); this is because if a single process in an MPI job fails, the entire job fails. Instead, OVIS uses asynchronous database replication to aggregate statistical models. OVIS has several additional features beyond those present in VTK that, first, accommodate its particular data format and, second, improve the memory usage and speed of the statistical analyses. First, because many statistical algorithms are multivariate in nature and sensor data is typically univariate, interpolation of the data is required to provide simultaneous observations of metrics. Note that in this report we refer to a single value obtained from a sensor as a measurement, while a collection of multiple sensor values simultaneously present in the system is an observation. A base class for interpolation is provided that abstracts the operation of converting multiple sensor measurements into simultaneous observations. A concrete implementation is provided that performs piecewise constant temporal interpolation of multiple metrics across a single component. Second, because calculations may summarize data too large to fit in memory, OVIS analyzes batches of observations at a time and aggregates these intermediate intra-process models as it goes, before storing the final model for inter-process aggregation via database replication. This reduces the memory footprint of the analysis, interpolation, and the database client and server query processing. It also interleaves processing with the disk I/O required to fetch data from the database, further improving speed. This report documents how OVIS performs analyses and how to create additional analysis components that fetch measurements from the database, perform interpolation, or perform operations on streamed observations (such as model updates or assessments). The rest of this section outlines the OVIS analysis algorithm and is followed by sections specific to each subtask. Note that we limit our discussion for now to the creation of a model from a set of measurements, and do not include the assessment of observations using a model. The same framework can be used for assessment, but that use case is not detailed in this report.
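A minimal sketch of the batch-then-merge pattern described above (illustrative Python, assuming a simple running mean/variance model; this is not the VTK or OVIS implementation):

```python
import numpy as np

class StreamingModel:
    """Accumulates descriptive statistics over batches of observations and
    supports merging partial models, mirroring OVIS-style intra-process
    aggregation followed by inter-process (cross-shepherd) aggregation."""

    def __init__(self):
        self.n = 0          # observation count
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # sum of squared deviations from the mean

    def update(self, batch):
        """Fold one batch of observations into the model."""
        batch = np.asarray(batch, dtype=float)
        n_b, mean_b = batch.size, batch.mean()
        m2_b = ((batch - mean_b) ** 2).sum()
        self._merge(n_b, mean_b, m2_b)

    def merge(self, other):
        """Combine with a model built independently on another node."""
        self._merge(other.n, other.mean, other.m2)

    def _merge(self, n_b, mean_b, m2_b):
        # Parallel combination of (count, mean, M2) partial statistics.
        n_tot = self.n + n_b
        if n_tot == 0:
            return
        delta = mean_b - self.mean
        self.m2 += m2_b + delta ** 2 * self.n * n_b / n_tot
        self.mean += delta * n_b / n_tot
        self.n = n_tot

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

# Two 'shepherd' nodes each process their local batches, then merge the models.
a, b = StreamingModel(), StreamingModel()
a.update(np.random.normal(0.0, 1.0, 1000))
b.update(np.random.normal(0.0, 1.0, 1000))
a.merge(b)
print(a.n, round(a.mean, 3), round(a.variance, 3))
```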
The observation and characterization of a single-atom system in silicon is a significant landmark in half a century of device miniaturization, and presents an important new laboratory for fundamental quantum and atomic physics. We compare measurements of the spectrum of a single two-electron (2e) atom system in silicon, a negatively charged (D-) gated arsenic donor in a FinFET, with multi-million-atom tight binding (TB) calculations. The TB method captures accurate single-electron eigenstates of the device, taking into account device geometry, donor potentials, applied fields, interfaces, and the full host bandstructure. In a previous work, the depths and fields of As donors in six device samples were established through excited state spectroscopy of the D0 electron and comparison with TB calculations. Using self-consistent field (SCF) TB, we computed the charging energies of the D- electron for the same six device samples and found good agreement with the measurements. Although a bulk donor has only a bound singlet ground state and a charging energy of about 40 meV, calculations show that a gated donor near an interface can have a reduced charging energy and bound excited states in the D- spectrum. Measurements indeed reveal reduced charging energies and bound 2e excited states, at least one of which is a triplet. The calculations also show the influence of the host valley physics in the two-electron spectrum of the donor.
This paper proposes a definition of 'IA and IA-enabled products' based on threat, as opposed to the 'security services' definition (i.e., 'confidentiality, authentication, integrity, access control or non-repudiation of data') provided by Department of Defense (DoD) Instruction 8500.2, 'Information Assurance (IA) Implementation.' The DoDI 8500.2 definition is too broad, making it difficult to distinguish products that need higher protection from those that do not. As a consequence, the products that need higher protection do not receive it, increasing risk. The threat-based definition proposed in this paper solves those problems by focusing attention on threats, thereby moving beyond compliance to risk management. (DoDI 8500.2 provides the definitions and controls that form the basis for IA across the DoD.) Familiarity with 8500.2 is assumed.