Empirical studies suggest that consumption is more sensitive to current income than the permanent income hypothesis predicts, which raises questions about expectations for future income, risk aversion, and the role of economic confidence measures. This report surveys fundamental economic literature as well as burgeoning computational modeling methods to support efforts to better anticipate cascading economic responses to terrorist threats and attacks. It is a three-part survey intended to support the incorporation of models of economic confidence into agent-based microeconomic simulations. We first review broad underlying economic principles related to this topic. We then review the economic principle of confidence and related empirical studies. Finally, we provide a brief survey of efforts and publications related to agent-based economic simulation.
Simulations within density functional theory (DFT) are a common component of research into the physics of materials. With the broad success of DFT, it is easily forgotten that computational DFT methods do not represent simulated properties directly, but require careful construction of models that are computable approximations to a physical property. Perhaps foremost among these computational considerations is the routine use of the supercell approximation to construct finite models that represent infinite systems. The pitfalls of using supercells (k-space sampling, boundary conditions, cell sizes) are often underappreciated. We present examples (e.g. vacancy defects) that exhibit a surprising or significant dependence on the supercell, and describe workable solutions. We describe the procedures needed to construct meaningful models for simulations of real material systems, focusing on k-space and cell-size issues.
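As an illustration of the kind of convergence checks implied here, the sketch below varies the supercell size and k-point mesh together for a single-vacancy formation energy. It is a minimal sketch only: the Lennard-Jones calculator is a stand-in so the script runs without a DFT code, and the element, lattice constant, and meshes are illustrative assumptions rather than recommendations.

```python
# Sketch of a joint supercell-size / k-point convergence check for a
# single-vacancy formation energy. The Lennard-Jones calculator is only a
# stand-in so the script runs without a DFT code; a real calculation would
# attach a DFT calculator, for which the k-point mesh actually matters.
from ase.build import bulk
from ase.calculators.lj import LennardJones

def total_energy(atoms, kpts):
    # Stand-in for a self-consistent DFT total energy; `kpts` is unused here
    # but would be passed to a real calculator.
    atoms.calc = LennardJones()
    return atoms.get_potential_energy()

def vacancy_formation_energy(n, kpts):
    # E_f = E(defective supercell) - (N - 1)/N * E(perfect supercell)
    perfect = bulk("Al", "fcc", a=4.05).repeat((n, n, n))
    defective = perfect.copy()
    del defective[0]                      # remove one atom -> single vacancy
    N = len(perfect)
    return total_energy(defective, kpts) - (N - 1) / N * total_energy(perfect, kpts)

# Cell size and k-point mesh should be converged together: a larger supercell
# needs proportionally fewer k-points for the same sampling density.
for n, kpts in [(2, (8, 8, 8)), (3, (6, 6, 6)), (4, (4, 4, 4))]:
    print(n, kpts, vacancy_formation_energy(n, kpts))
```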
Supercomputer architects strive to maximize the performance of scientific applications. Unfortunately, the large, unwieldy nature of most scientific applications has led to the creation of artificial benchmarks, such as SPEC-FP, for architecture research. Given the impact that these benchmarks have on architecture research, this paper seeks an understanding of how they relate to real-world applications within the Department of Energy. Since the memory system has been found to be particularly important for many applications, the paper focuses on how the SPEC-FP benchmarks and DOE applications differ in their use of the memory system. The results indicate that while SPEC-FP is a well-balanced suite, supercomputing applications typically demand more from the memory system and must perform more 'other work' (in the form of integer computations) alongside the floating-point operations. The SPEC-FP suite generally demonstrates slightly more temporal locality, leading to somewhat lower bandwidth demands. The most striking result is the cumulative difference between the benchmarks and the applications in the requirements to sustain the floating-point operation rate: the DOE applications require significantly more data from main memory (not cache) per FLOP and dramatically more integer instructions per FLOP.
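The comparison above rests on derived memory-balance metrics. The sketch below shows how such metrics (DRAM bytes per FLOP and integer instructions per FLOP) might be computed from raw hardware-counter totals; the counter names, values, and the 64-byte line size are hypothetical placeholders, not measurements from the paper.

```python
# Minimal sketch of derived memory-balance metrics: bytes of main-memory
# traffic per floating-point operation and integer instructions per FLOP,
# computed from raw hardware-counter totals. All numbers below are
# hypothetical placeholders, not measured data.

def memory_balance(flops, int_instrs, llc_misses, line_bytes=64):
    """Return (DRAM bytes per FLOP, integer instructions per FLOP)."""
    dram_bytes = llc_misses * line_bytes   # each last-level-cache miss pulls one line
    return dram_bytes / flops, int_instrs / flops

# Hypothetical counter totals for a benchmark and an application:
workloads = {
    "spec_fp_kernel": dict(flops=4.0e11, int_instrs=2.0e11, llc_misses=1.0e9),
    "doe_application": dict(flops=4.0e11, int_instrs=9.0e11, llc_misses=6.0e9),
}
for name, counters in workloads.items():
    bpf, ipf = memory_balance(**counters)
    print(f"{name}: {bpf:.2f} DRAM bytes/FLOP, {ipf:.2f} int instr/FLOP")
```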
We compare inexact Newton and coordinate descent optimization methods for improving the quality of a mesh by repositioning the vertices, where the overall quality is measured by the harmonic mean of the mean-ratio metric. The effects of problem size, element size heterogeneity, and various vertex displacement schemes on the performance of these algorithms are assessed for a series of tetrahedral meshes.
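For concreteness, the sketch below evaluates the quality measure named above for a single tetrahedron and aggregates it over a mesh. The equilateral reference element and the (0, 1] normalization are one common convention for the mean-ratio metric and are meant as an illustration, not necessarily the exact formulation used here.

```python
# Sketch of the mean-ratio quality of a tetrahedron and the harmonic-mean
# aggregate over a mesh, using an equilateral unit-edge reference element.
import numpy as np

# Edge matrix of the ideal (equilateral, unit-edge) tetrahedron.
W = np.array([[1.0, 0.5, 0.5],
              [0.0, np.sqrt(3) / 2, np.sqrt(3) / 6],
              [0.0, 0.0, np.sqrt(6) / 3]])
W_INV = np.linalg.inv(W)

def mean_ratio(verts):
    """Mean-ratio quality: 1 for the ideal shape, approaching 0 as it degenerates."""
    v0, v1, v2, v3 = np.asarray(verts, dtype=float)
    A = np.column_stack([v1 - v0, v2 - v0, v3 - v0])   # physical edge matrix
    T = A @ W_INV                                      # map from ideal to physical
    det = np.linalg.det(T)
    if det <= 0.0:                                     # inverted element
        return 0.0
    return 3.0 * det ** (2.0 / 3.0) / np.sum(T * T)

def harmonic_mean_quality(elements):
    """Overall mesh quality: harmonic mean of the per-element mean ratios."""
    q = np.array([mean_ratio(e) for e in elements])
    return len(q) / np.sum(1.0 / q)

# Example: a regular tetrahedron scores exactly 1.
regular = [(0, 0, 0), (1, 0, 0), (0.5, np.sqrt(3) / 2, 0),
           (0.5, np.sqrt(3) / 6, np.sqrt(6) / 3)]
print(harmonic_mean_quality([regular]))   # -> 1.0
```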
Complex simulations (in particular, those involving multiple coupled physics) cannot be understood solely through geometry-based visualizations. Such visualizations are necessary for interpreting results and gaining insight into kinematics; however, they are insufficient when striving to understand why or how something happened, or when investigating a simulation's dynamic evolution. For multiphysics simulations (e.g. those combining solid dynamics with thermal conduction, magnetohydrodynamics, and radiation hydrodynamics), complex interactions between physics and material properties take place within the code and must be investigated in other ways. Drawing on extensive previous work in view coordination, brushing-and-linking techniques, and powerful visualization libraries, we have developed Prism, an application targeted at a specific analytic need at Sandia National Laboratories. This multiview scientific visualization tool tightly integrates geometric and phase-space views of simulation data and material models. Working closely with analysts, we have developed this production tool to promote understanding of complex multiphysics simulations. We discuss the current implementation of Prism, along with specific examples of results obtained by using the tool.
We present an exchange-correlation functional that enables an accurate treatment of systems with electronic surfaces. The functional is developed within the subsystem functional paradigm [1], combining the local density approximation for interior regions with a new functional designed for surface regions. It is validated for a variety of materials by calculations of (i) properties where surface effects are present, and (ii) established bulk properties. Good and coherent results are obtained, indicating that this functional may serve well as a universal first choice for solid-state systems. The good performance of this first subsystem functional also suggests that further improved functionals can be constructed by this approach.
We develop a specialized treatment of electronic surface regions which, via the subsystem functional approach [1], can be used in functionals for self-consistent density-functional theory (DFT). Approximations for both exchange and correlation energies are derived for an electronic surface. An interpolation index is used to combine this surface-specific functional with a functional for interior regions. When the local density approximation (LDA) is used for the interior region, the end result is a straightforward density-gradient dependent functional that shows promising results. Further improvement of the treatment of the interior region by the use of a local gradient expansion approximation is also discussed.
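A schematic form of the interpolation described above is sketched below; the symbol X(r) for the interpolation index and the arguments given to the two energy densities are notational assumptions for illustration, not the exact expressions used in the work.

```latex
% Schematic form of the interpolated functional: an interpolation index
% X(\mathbf{r}) \in [0, 1] blends an interior energy density (e.g. LDA)
% with the surface-specific one.
\begin{equation*}
  E_{\mathrm{xc}}[n] \approx \int d\mathbf{r}\, n(\mathbf{r})
  \Big[ \bigl(1 - X(\mathbf{r})\bigr)\,
        \epsilon_{\mathrm{xc}}^{\mathrm{interior}}\!\bigl(n(\mathbf{r})\bigr)
      + X(\mathbf{r})\,
        \epsilon_{\mathrm{xc}}^{\mathrm{surface}}\!\bigl(n(\mathbf{r}), \nabla n(\mathbf{r})\bigr)
  \Big]
\end{equation*}
```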
We have found that developing a computational framework for reconstructing error control codes for engineered data, and ultimately for deciphering genetic regulatory coding sequences, is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept for the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high-risk, high-payoff realm into the highly probable, high-payoff domain. Additionally, this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that could be engineered to cause catastrophic damage. Understanding how biological organisms communicate their genetic message efficiently in the presence of noise can also improve our current communication protocols, a continuing research interest. Toward this end, the project goals include: (1) develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes, and use these methods to determine error control (EC) code parameters for gene regulatory sequences; (2) develop an evolutionary computing framework for near-optimal solutions to the algebraic code reconstruction problem, to be tested on engineered and biological sequences.
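A minimal sketch of one standard approach to the block-code half of goal (1) follows: hypothesize candidate block lengths n, arrange the observed bit stream into n-bit rows, and look for lengths at which the GF(2) rank of the resulting matrix drops below the row width, which both flags the block length and estimates the dimension k. The (7,4) Hamming generator below is synthetic test data for the sketch, not a biological sequence.

```python
# Blind block-length estimation sketch: arrange the received bit stream into
# rows of a candidate length n and compute the GF(2) rank of that matrix. At
# the true n (and its multiples) the code's parity constraints make the rank
# fall below the row width; the rank itself estimates the dimension k.
import numpy as np

def gf2_rank(rows):
    # Rank over GF(2) via Gaussian elimination on integer bit masks.
    m = [int("".join(str(int(b)) for b in r), 2) for r in rows]
    rank = 0
    for col_bit in reversed(range(len(rows[0]))):
        pivot = next((i for i in range(rank, len(m)) if (m[i] >> col_bit) & 1), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(len(m)):
            if i != rank and (m[i] >> col_bit) & 1:
                m[i] ^= m[rank]
        rank += 1
    return rank

def rank_deficiency_by_length(bits, max_n=16):
    # Map each candidate block length n to (n - rank); nonzero values flag
    # candidate block lengths.
    scores = {}
    for n in range(2, max_n + 1):
        rows = [bits[i:i + n] for i in range(0, len(bits) - n + 1, n)]
        scores[n] = n - gf2_rank(rows)
    return scores

# Synthetic test stream: 200 random messages encoded with a (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
rng = np.random.default_rng(0)
bits = list((rng.integers(0, 2, size=(200, 4)) @ G % 2).ravel())
print(rank_deficiency_by_length(bits))   # expect a clear deficiency at n = 7 and 14
```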
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem, in which both the computed data and the experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its implications for model predictability. We call this approach Calibration under Uncertainty (CUU). We present our current thinking on CUU, outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
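As a concrete contrast between the two formulations, the sketch below calibrates a toy model first by plain least squares and then by maximizing a posterior whose likelihood carries both an observation-noise term and an assumed model-error term. The exponential model, noise levels, and Gaussian prior are illustrative assumptions, not anything from the report.

```python
# Toy comparison: deterministic least-squares calibration vs. a Bayesian-style
# MAP estimate that folds in observation noise, a model-error term, and a prior.
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 15)
y_obs = 2.0 * np.exp(-1.3 * t) + rng.normal(0.0, 0.05, t.size)   # synthetic "experiment"

def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

# (1) Deterministic calibration: minimize the squared model-data misfit.
ls = least_squares(lambda th: model(th, t) - y_obs, x0=[1.0, 1.0])

# (2) Calibration under uncertainty, Bayesian style: the negative log-posterior
#     includes observation noise sigma_obs, an assumed model-error term
#     sigma_model, and a broad Gaussian prior on the parameters.
sigma_obs, sigma_model = 0.05, 0.10

def neg_log_posterior(theta):
    resid = y_obs - model(theta, t)
    var = sigma_obs**2 + sigma_model**2                  # total variance per datum
    log_like = -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))
    log_prior = -0.5 * np.sum((np.asarray(theta) / 10.0) ** 2)   # N(0, 10^2) prior
    return -(log_like + log_prior)

map_est = minimize(neg_log_posterior, x0=ls.x)
print("least-squares estimate:", ls.x)
print("MAP estimate with model error:", map_est.x)
```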