Structural assemblies often include bolted connections that are a primary mechanism for energy dissipation and nonlinear response at elevated load levels. Typically these connections are idealized within a structural dynamics finite element model as linear elastic springs, with the spring stiffness tuned to reproduce modal test data taken on a prototype. In conventional practice, modal test data is also used to estimate nominal values of modal damping for applications with load amplitudes comparable to those employed in the modal tests. Although this simplification of joint mechanics provides a convenient modeling approach, with the advantages of reduced complexity and solution requirements, it often leads to poor response predictions in load regimes associated with nonlinear system behavior. In this document we present an alternative approach using the concept of a "whole-joint" or "whole-interface" model [1]. We discuss the nature of the constitutive model, the manner in which model parameters are deduced, and the comparison of structural dynamic predictions with results from experimental hardware subjected to a series of transient excitations beginning at low levels and increasing to levels that produced macro-slip in the joint. A further comparison is made with a traditional "tuned" linear model. The results make evident the ability of the whole-interface model to predict the onset of macro-slip, as well as the large improvement in predicted response levels relative to those given by the linear model. Additionally, comparison between prediction and high-amplitude experiments suggests areas for further work.
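To make the whole-interface idea concrete, the sketch below implements a simple parallel-Jenkins (Iwan-type) hysteretic element in Python. This is only an illustrative stand-in, not the constitutive model of [1]; the stiffness k and the slider strengths are arbitrary placeholder values. Its hysteresis loops grow in area faster than linearly with amplitude, which is the qualitative joint behavior a tuned linear spring and damper cannot reproduce, and once every slider lets go the element is in macro-slip.

```python
import numpy as np

def iwan_force(u_history, k=1.0e6, slip_forces=None):
    """Quasi-static force of a parallel-Jenkins (Iwan-type) element.

    Illustrative only: a crude stand-in for a whole-joint constitutive
    model, not the formulation of [1].  Each Jenkins element is a spring
    of stiffness k/N in series with a Coulomb slider; sliders with low
    slip forces let go first (micro-slip), and once all slide the joint
    is in macro-slip.
    """
    if slip_forces is None:
        slip_forces = np.linspace(10.0, 100.0, 10)   # assumed slider strengths
    N = len(slip_forces)
    x = np.zeros(N)                                  # slider positions
    forces = []
    for u in u_history:
        trial = (k / N) * (u - x)                    # elastic trial force per element
        slipped = np.abs(trial) > slip_forces
        # sliders pushed past their strength slip and carry only their limit force
        x[slipped] = u - np.sign(trial[slipped]) * slip_forces[slipped] * N / k
        elem = np.clip(trial, -slip_forces, slip_forces)
        forces.append(elem.sum())
    return np.array(forces)

# At this amplitude every slider eventually slips (macro-slip) and the joint
# force saturates near the sum of the slider strengths.
u = 2e-3 * np.sin(np.linspace(0, 4 * np.pi, 400))
f = iwan_force(u)
```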
This paper addresses the coupling of experimental and finite element models of substructures. In creating the experimental model, difficulties arise in applying moments and estimating the resulting rotations at the connection point between the experimental and finite element models. In this work, a simple test fixture for applying moments and estimating rotations is used to estimate these quantities more accurately. The test fixture is analytically "subtracted" from the model using the admittance approach. Inherent in this process is the inversion of frequency response function matrices, which can amplify the uncertainty in the measured data. Presented here is this work applied to a two-component beam model, along with analyses that attempt to identify and quantify some of these uncertainties. The admittance model of one beam component was generated experimentally using the moment-rotation fixture, and that of the other from a detailed finite element model. During analytical testing of the admittance modeling algorithm, it was discovered that the component admittance models generated by finite elements were ill-conditioned due to the inherent physics.
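For a rigid connection at collocated interface degrees of freedom (translation plus the rotation estimated with the fixture), the fixture can be removed by subtracting interface impedances, which is one common form of the admittance approach. The Python sketch below is a minimal illustration under that assumption, not the paper's exact algorithm; the two matrix inversions per frequency line are where measurement uncertainty gets amplified, particularly where the FRF matrices are ill-conditioned.

```python
import numpy as np

def subtract_fixture(H_assembly, H_fixture):
    """Remove a fixture from measured FRFs by interface impedance subtraction.

    H_assembly, H_fixture: complex arrays of shape (n_freq, n_dof, n_dof)
    holding the interface admittance (FRF) matrices of the component-plus-
    fixture assembly and of the fixture alone at the same frequency lines.
    The interface DOFs include the rotation estimated with the moment
    fixture.  A hedged sketch of the admittance idea only.
    """
    H_comp = np.empty_like(H_assembly)
    for i in range(H_assembly.shape[0]):
        # impedance of the bare component = assembly impedance - fixture impedance
        Z = np.linalg.inv(H_assembly[i]) - np.linalg.inv(H_fixture[i])
        H_comp[i] = np.linalg.inv(Z)
    return H_comp
```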
In order to create an analytical model of a material or structure, two sets of experiments must be performed: calibration and validation. Calibration experiments provide the analyst with the parameters from which to build a model that encompasses the behavior of the material. Once the model is calibrated, the new analytical results must be compared with a different, independent set of experiments, referred to as the validation experiments. This modeling procedure was performed for a crushable honeycomb material, with the validation experiments presented here. This paper covers the design of the validation experiments, the analysis of the resulting data, and the metric used for model validation.
Processing-in-Memory (PIM) technology encompasses a range of research leveraging a tight coupling of memory and processing. The most distinctive features of the technology are extremely wide paths to memory, extremely low memory latency, and wide functional units. Many PIM researchers are also exploring extremely fine-grained multi-threading capabilities. This paper explores a mechanism for leveraging these features of PIM technology to enhance commodity architectures in a seemingly mundane way: accelerating MPI. Modern network interfaces leverage simple processors to offload portions of the MPI semantics, particularly the management of posted receive and unexpected message queues. Without adding cost or increasing clock frequency, using PIMs in the network interface can enhance performance. The results are a significant decrease in latency and increase in small message bandwidth, particularly when long queues are present.
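The queue management being offloaded is conceptually simple but latency-critical. The Python sketch below outlines the matching logic (names and structures are illustrative, not an actual MPI implementation): an arriving message is searched against the posted-receive queue and, failing a match, parked on the unexpected-message queue, while a newly posted receive first searches the unexpected queue. When these queues grow long, the linear search dominates, which is exactly where wide, low-latency PIM memory access helps.

```python
ANY_SOURCE = -1
ANY_TAG = -1

posted_receives = []     # (source, tag, buffer) entries posted by the application
unexpected_msgs = []     # messages that arrived before a matching receive was posted

def _match(want_src, want_tag, src, tag):
    # a receive matches an incoming (src, tag) if its source/tag agree or are wildcards
    return want_src in (src, ANY_SOURCE) and want_tag in (tag, ANY_TAG)

def incoming_message(src, tag, payload):
    """Network side: try to match a posted receive, else queue as unexpected."""
    for i, (r_src, r_tag, buf) in enumerate(posted_receives):
        if _match(r_src, r_tag, src, tag):
            del posted_receives[i]
            buf.extend(payload)              # deliver straight into the posted buffer
            return "delivered"
    unexpected_msgs.append((src, tag, payload))
    return "unexpected"

def post_receive(src, tag, buf):
    """Application side: drain the unexpected queue first, else post the receive."""
    for i, (u_src, u_tag, payload) in enumerate(unexpected_msgs):
        if _match(src, tag, u_src, u_tag):
            del unexpected_msgs[i]
            buf.extend(payload)
            return "matched_unexpected"
    posted_receives.append((src, tag, buf))
    return "posted"
```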
The processes and functional constituents of biological photosynthetic systems can be mimicked to produce a variety of functional nanostructures and nanodevices. The photosynthetic nanostructures produced are analogs of the naturally occurring photosynthetic systems and are composed of biomimetic compounds (e.g., porphyrins). For example, photocatalytic nanotubes can be made by ionic self-assembly of two oppositely charged porphyrins tectons [1]. These nanotubes mimic the light-harvesting and photosynthetic functions of biological systems like the chlorosomal rods and reaction centers of green sulfur bacteria. In addition, metal-composite nanodevices can be made by using the photocatalytic activity of the nanotubes to reduce aqueous metal salts to metal atoms, which are subsequently deposited onto tube surfaces [2]. In another approach, spatial localization of photocatalytic porphyrins within templating surfactant assemblies leads to controlled growth of novel dendritic metal nanostructures [3].
Conference Proceedings of the Society for Experimental Mechanics Series
Hasselman, Timothy; Wathugala, G.W.; Urbina, Angel; Paez, Thomas L.
Mechanical systems behave randomly and it is desirable to capture this feature when making response predictions. Currently, there is an effort to develop predictive mathematical models and test their validity through the assessment of their predictive accuracy relative to experimental results. Traditionally, the approach to quantify modeling uncertainty is to examine the uncertainty associated with each of the critical model parameters and to propagate this through the model to obtain an estimate of uncertainty in model predictions. This is referred to as the "bottom-up" approach. However, parametric uncertainty does not account for all sources of the differences between model predictions and experimental observations, such as model form uncertainty and experimental uncertainty due to the variability of test conditions, measurements, and data processing. Uncertainty quantification (UQ) based directly on the differences between model predictions and experimental data is referred to as the "top-down" approach. This paper discusses both the top-down and bottom-up approaches and uses the respective stochastic models to assess the validity of a joint model with respect to experimental data not used to calibrate the model, i.e., random vibration versus sine test data. Practical examples based on joint modeling and testing performed by Sandia are presented and conclusions are drawn as to the pros and cons of each approach.
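As a hedged illustration of the distinction, the Python sketch below contrasts the two approaches on a placeholder response function (the model, distributions, and "data" are invented for illustration and are not the Sandia joint model or test data): the bottom-up estimate propagates assumed parameter distributions through the model, while the top-down estimate characterizes the prediction-minus-experiment residuals directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_model(stiffness, damping, load):
    """Placeholder response model (illustrative only)."""
    return load / np.sqrt((stiffness - 1.0) ** 2 + (damping * 1.0) ** 2)

# Bottom-up: propagate assumed parameter uncertainty through the model.
k_samples = rng.normal(2.0, 0.1, 1000)        # assumed stiffness distribution
c_samples = rng.normal(0.05, 0.01, 1000)      # assumed damping distribution
predictions = joint_model(k_samples, c_samples, load=1.0)

# Top-down: characterize the prediction-versus-experiment discrepancy directly.
experiments = rng.normal(predictions.mean(), 0.2, 30)   # stand-in for test data
residuals = predictions.mean() - experiments

print("bottom-up prediction spread:", predictions.std())
print("top-down discrepancy spread:", residuals.std())
```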
Achieving good scalability for large simulations based on structured adaptive mesh refinement is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Domain-based partitioners serve as a foundation for techniques designed to improve scalability, and they have traditionally been designed on the basis of an independence assumption regarding the computational flow among grid patches at different refinement levels. This assumption, however, does not hold in practice, and the effectiveness of these techniques is significantly impaired as a result. This paper introduces a partitioning method that does not rely on this assumption. The method is tested for four different applications exhibiting different behaviors. The results show that synchronization costs can on average be reduced by 75 percent. The conclusion is that the method is suitable as a foundation for general hierarchical methods designed to improve the scalability of structured adaptive mesh refinement applications.
This paper is about making reversible logic a reality for supercomputing. Reversible logic offers a way to exceed certain basic limits on the performance of computers, yet a powerful case will have to be made to justify its substantial development expense. This paper explores the limits of current, irreversible logic for supercomputers, thus establishing a threshold above which reversible logic is the only solution. Problems above this threshold are discussed, with the science and mitigation of global warming treated in detail. To further develop the idea of using reversible logic in supercomputing, a design is presented for a 1 Zettaflops supercomputer of the kind required to address global warming. However, creating such a design requires deviations from the mainstream of both the software for climate simulation and the research directions of reversible logic. These deviations provide direction on how to make reversible logic practical. Copyright 2005 ACM.
43rd AIAA Aerospace Sciences Meeting and Exhibit - Meeting Papers
Barone, Matthew F.; Roy, Christopher J.
Simulations of a low-speed square cylinder wake and a supersonic axisymmetric base wake are performed using the Detached Eddy Simulation (DES) model. A reduced-dissipation form of the Symmetric TVD scheme is employed to mitigate the effects of dissipative error in regions of smooth flow. The reduced-dissipation scheme is demonstrated on a 2D square cylinder wake problem, showing a dramatic increase in accuracy for a given grid resolution. The results for simulations on three grids of increasing resolution for the 3D square cylinder wake are compared to experimental data and to other LES and DES studies. The comparisons of mean flow and global mean flow quantities to experimental data are favorable, while the results for second order statistics in the wake are mixed and do not always improve with increasing spatial resolution. Comparisons to LES studies are also generally favorable, suggesting DES provides an adequate subgrid scale model. Predictions of base drag and centerline wake velocity for the supersonic wake are also good, given sufficient grid refinement. These cases add to the validation library for DES and support its use as an engineering analysis tool for accurate prediction of global flow quantities and mean flow properties.
In modal testing, the most popular tools for exciting a structure are hammers and shakers. This paper reviews the applications for which shakers have an advantage. In addition, the advantages and disadvantages of different forcing inputs (e.g., sinusoidal, random, burst random, and chirp) that can be applied with a shaker are noted. Special considerations are reported for the fixtures required for shaker testing (blocks, force gages, stingers) to obtain satisfactory results. Various problems that the author has encountered during single- and multi-shaker modal tests are described along with their solutions.
This paper provides an overview of several approaches to formulating and solving optimization under uncertainty (OUU) engineering design problems. In addition, the topic of high-performance computing and OUU is addressed, with a discussion of the coarse- and fine-grained parallel computing opportunities in the various OUU problem formulations. The OUU approaches covered here are: sampling-based OUU, surrogate model-based OUU, analytic reliability-based OUU (also known as reliability-based design optimization), polynomial chaos-based OUU, and stochastic perturbation-based OUU.
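A minimal sketch of the first of these, sampling-based OUU, is given below in Python (the performance function, distributions, and robustness weights are placeholders): an inner sampling loop estimates statistics of the response over the uncertain inputs, and an outer optimizer works on those statistics. The inner-loop samples are embarrassingly parallel, supplying the fine-grained parallelism mentioned above, while concurrent optimizer iterates or problem formulations supply the coarse-grained level.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
xi = rng.normal(0.0, 1.0, 500)   # fixed samples of the uncertain variable
                                 # (common random numbers keep the outer loop smooth)

def performance(design, xi):
    """Placeholder response of a design under uncertain input xi (illustrative)."""
    return (design[0] - 2.0) ** 2 + design[1] ** 2 * (1.0 + xi)

def ouu_objective(design):
    # Inner loop: sample statistics of the response; each evaluation of
    # performance() over the samples could be farmed out in parallel.
    g = performance(np.asarray(design), xi)
    return g.mean() + 2.0 * g.std()      # robustness measure: mean + 2 sigma

# Outer loop: deterministic optimizer over the design variables.
result = minimize(ouu_objective, x0=[0.0, 1.0], method="Nelder-Mead")
print(result.x)
```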
Latin Hypercube Sampling (LHS) is widely used as a sampling-based method for probabilistic calculations. This method has some clear advantages over classical random sampling (RS) that derive from its efficient stratification properties. However, one of its limitations is that it is not possible to extend the size of an initial sample by simply adding new simulations, as this would destroy the efficient stratification associated with LHS. We describe a new method to extend an LHS to n (>= 2) times its original size while preserving both the LHS structure and any induced correlations between the input parameters. The method introduces a refined grid for the original sample and then fills in the empty rows and columns with new data in a way that conserves both the LHS structure and any induced correlations. An estimate of the bounds on the resulting correlation between two variables is derived for n = 2. This result shows that the final correlation is close to the average of the correlations from the original sample and from the new sample used to fill in the empty rows and columns.
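The Python sketch below illustrates the in-filling idea for the simplest case of doubling the sample: the original grid of m bins per variable is refined to 2m bins, the refined bins left empty by the original points are identified, and one new point is placed in each empty bin. This is a minimal illustration of the structure-preserving step only; unlike the method described here, it pairs the empty bins at random across variables and so makes no attempt to preserve induced correlations.

```python
import numpy as np

rng = np.random.default_rng(2)

def double_lhs(sample):
    """Double an m-point Latin Hypercube sample on the unit hypercube.

    Refine each variable's m bins to 2m bins, find the refined bins not
    already occupied by an original point, and put one new point in each
    empty bin (empty bins paired at random across variables).  The result
    is a valid 2m-point LHS, but induced correlations are not controlled.
    """
    m, d = sample.shape
    new_points = np.empty((m, d))
    for j in range(d):
        occupied = np.floor(sample[:, j] * 2 * m).astype(int)   # refined bins in use
        empty = np.setdiff1d(np.arange(2 * m), occupied)        # exactly m bins remain
        rng.shuffle(empty)
        new_points[:, j] = (empty + rng.random(m)) / (2 * m)    # uniform within each empty bin
    return np.vstack([sample, new_points])

# Example: a 10-point LHS in 3 variables extended to 20 points.
ranks = np.argsort(rng.random((10, 3)), axis=0)
base = (ranks + rng.random((10, 3))) / 10.0
extended = double_lhs(base)
```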
Chemiresistor microsensors have been developed to provide continuous in-situ detection of volatile organic compounds (VOCs). The chemiresistor sensor is packaged in a rugged, waterproof housing that allows the device to detect VOCs in air, soil, and water. Preconcentrators are also being developed to enhance the sensitivity of the chemiresistor sensor. The "micro-hotplate" preconcentrator is placed face-to-face against the array of chemiresistors inside the package. At prescribed intervals, the preconcentrator is heated to desorb VOCs that have accumulated on the sorbent material on the one-micron-thick silicon-nitride membrane. The pulse of higher-than-ambient concentration of VOC vapor is then detected by the adjacent chemiresistors. The plume is allowed to diffuse out of the package through slots adjacent to the preconcentrator. The integrated chemiresistor/preconcentrator sensor has been tested in the laboratory to evaluate the impacts of sorbent materials, fabrication methods, and repeated heating cycles on the longevity and performance of the sensor. Calibration methods have also been developed, and field tests have been initiated. Copyright ASCE 2005.
Real-time water quality and chemical-specific sensors are becoming more commonplace in water distribution systems. The overall objective of the sensor network is to protect consumers from accidental and malevolent contamination events occurring within the distribution network. This objective can be quantified in several different ways, including minimizing the amount of contaminated water consumed, minimizing the extent of the contamination within the network, and minimizing the time to detection. We examine the ability of a sensor network to meet these objectives as a function of both the detection limit of the sensors and the number of sensors in the network. A moderately sized network is used as an example, and sensors are placed randomly. The source term is a passive injection into a node, and the resulting concentration in the node is a function of the volumetric flow through that node. The concentration of the contaminant at the source node is averaged over all time steps during the injection period. For each combination of sensor count and detection limit, the mean values of the different objectives across multiple random sensor placements are evaluated. The results allow the tradeoff between the necessary detection limit of a sensor and the number of sensors to be evaluated. They show that, for the example problem examined here, a sensor detection limit of 0.01 of the average source concentration is adequate for maximum protection. Copyright ASCE 2005.
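A hedged sketch of the tradeoff computation follows (all numbers and the concentration field are synthetic placeholders; in the study the node concentrations come from a hydraulic and water-quality simulation of the distribution network): for each pairing of sensor count and detection limit, an objective such as time to detection is averaged over many random sensor placements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for simulated node concentrations, expressed as a
# fraction of the average source concentration: node_conc[t, n] is the
# concentration at node n during time step t.
n_nodes, n_steps = 200, 96
node_conc = rng.exponential(0.02, size=(n_steps, n_nodes))

def mean_time_to_detection(n_sensors, det_limit, n_trials=200):
    """Average the time-to-detection objective over random sensor placements."""
    times = []
    for _ in range(n_trials):
        sensors = rng.choice(n_nodes, size=n_sensors, replace=False)
        hits = np.argwhere(node_conc[:, sensors] >= det_limit)
        times.append(hits[:, 0].min() if hits.size else n_steps)  # censored at horizon
    return np.mean(times)

# Sweep sensor count against detection limit to expose the tradeoff.
for n_sensors in (5, 10, 20):
    for det_limit in (0.1, 0.01, 0.001):
        print(n_sensors, det_limit, mean_time_to_detection(n_sensors, det_limit))
```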
We have developed and implemented a method which, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micromachining. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model. The 3D model is first separated into bodies that are non-intersecting, are made from different materials, or are linked only through a ground plane. Next, for each body, unique horizontal cross sections are located and arranged into a tree based on their topological relationship. A branch-wise search of the tree uncovers locations where deposition boundaries must lie and identifies candidate masks, creating a generic mask set for the 3D model. Finally, specific process requirements that may constrain the generic mask set are considered.
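The tree and its branch-wise search can be outlined as follows in Python (a schematic sketch with hypothetical names and the geometry handling elided, not the implemented design tool): each unique horizontal cross section of a body is a node whose children are the cross sections directly above it, and walking each branch flags the heights at which a cross section's outline first differs from its parent's, since a deposition boundary, and hence a candidate mask edge, must lie there.

```python
from dataclasses import dataclass, field

@dataclass
class CrossSection:
    z: float                      # height of this unique cross section
    outline: frozenset            # 2-D footprint, abstracted here as a set of cells
    children: list = field(default_factory=list)   # cross sections directly above

def candidate_masks(node, parent_outline=None, masks=None):
    """Branch-wise walk collecting outlines where deposition boundaries must lie."""
    if masks is None:
        masks = []
    if parent_outline is None or node.outline != parent_outline:
        masks.append((node.z, node.outline))        # candidate mask for this level
    for child in node.children:
        candidate_masks(child, node.outline, masks)
    return masks

# Example: a block whose upper half is narrower than its base yields two
# candidate masks, one per distinct footprint.
base = CrossSection(0.0, frozenset({(0, 0), (1, 0), (2, 0)}))
top = CrossSection(1.0, frozenset({(1, 0)}))
base.children.append(top)
print(candidate_masks(base))
```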