When estimating parameters of a material model from data collected in a separate-effects physics experiment, the quality of fit is only part of the required information. Also necessary is the uncertainty in the estimated parameters, so that uncertainty quantification and model validation can be performed at the full-system level. The uncertainty and quality of fit of the data are often unavailable, yet both should be considered when fitting the data to a specified model. Many techniques are available for fitting data to a material model, and several of them are demonstrated in this work using a simple acoustic emission dataset. The parameters and their associated uncertainties are estimated using a variety of techniques and compared.
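As one concrete instance of such a technique (illustrative only: the model and data below are synthetic stand-ins, not the acoustic emission dataset of the paper), a nonlinear least-squares fit with SciPy's `curve_fit` returns both the parameter estimates and a covariance matrix from which 1-sigma parameter uncertainties follow:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-parameter "material model" used only for illustration.
def model(t, amplitude, decay_rate):
    return amplitude * np.exp(-decay_rate * t)

# Synthetic data: true parameters (3.0, 1.5) plus Gaussian noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 50)
data = model(t, 3.0, 1.5) + rng.normal(0.0, 0.05, t.size)

popt, pcov = curve_fit(model, t, data, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties of the estimates
for name, p, e in zip(["amplitude", "decay_rate"], popt, perr):
    print(f"{name} = {p:.3f} +/- {e:.3f}")
```

The covariance-based uncertainties are what allow the fitted parameters to be propagated into system-level uncertainty quantification rather than reported as bare point estimates.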
Although interdisciplinary research attracts increasing interest and effort, the benefits of this type of research are not always realized. To understand when expertise diversity will have positive or negative effects on research efforts, we examine how expertise diversity and diversity salience affect task conflict and idea sharing in interdisciplinary research groups. Using data from 148 researchers in 29 academic research labs, we provide evidence of the importance of social categorization states (i.e., expertise diversity salience) in understanding both the information processes (i.e., task conflict) and the creativity processes (i.e., idea sharing) in groups with expertise diversity. We show that expertise diversity can either increase or decrease task conflict depending on the salience of group members' expertise, in a curvilinear way: the moderating effect of diversity salience is strongest at a medium level of expertise diversity. Furthermore, enriched group work design can strengthen the benefits of task conflict for creative idea sharing only when expertise diversity salience is low. Finally, we show that idea sharing predicts group performance in interdisciplinary academic research labs over and above task conflict.
The process of rank aggregation is intimately intertwined with the structure of skew-symmetric matrices. We apply recent advances in the theory and algorithms of matrix completion to skew-symmetric matrices. This combination of ideas produces a new method for ranking a set of items. The essence of our idea is that a rank aggregation describes a partially filled skew-symmetric matrix. We extend an algorithm for matrix completion to handle skew-symmetric data and use that to extract ranks for each item. Our algorithm applies to both pairwise comparison and rating data. Because it is based on matrix completion, it is robust to both noise and incomplete data. We show a formal recovery result for the noiseless case and present a detailed study of the algorithm on synthetic data and Netflix ratings. Copyright 2011 ACM.
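The core identity behind this idea is that each observed comparison pins down a difference of latent scores, Y[i, j] = s_i − s_j, so the full matrix is skew-symmetric and low rank. The paper's solver is a matrix-completion algorithm; purely as an illustration of the underlying model (not the paper's method), a noiseless instance can be recovered by ordinary least squares:

```python
import numpy as np

def rank_from_pairwise(n, comparisons):
    """Estimate item scores from partial pairwise comparisons.

    comparisons: list of (i, j, y) with y = s_i - s_j observed for a
    subset of pairs. The full matrix Y[i, j] = s_i - s_j is
    skew-symmetric and rank 2, which is what makes completion possible.
    """
    rows, rhs = [], []
    for i, j, y in comparisons:
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(y)
    # Least-squares fit of the scores; center them since only
    # differences are observable.
    s, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return s - s.mean()

# Three items; only two of the three pairs are observed.
scores = rank_from_pairwise(3, [(0, 1, 2.0), (1, 2, 1.0)])
print(np.argsort(-scores))  # best-to-worst ordering: [0 1 2]
```

Matrix completion replaces this least-squares step when the observations are noisy and the goal is to fill in the unobserved skew-symmetric entries.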
This work presents a methodology based on the concept of error in constitutive equations (ECE) for the inverse reconstruction of viscoelastic properties using steady-state dynamics. The ECE algorithm presented herein consists of two main steps. In the first step, kinematically admissible strains and dynamically admissible stresses are generated through two auxiliary forward problems. In the second step, a new update of the complex shear and bulk moduli as functions of frequency is obtained by minimizing an ECE functional that measures the discrepancy between the kinematically admissible strains and the dynamically admissible stresses. The feasibility of the methodology is demonstrated through two numerical experiments. It was found that the magnitude and phase of the complex shear modulus can be accurately reconstructed in the presence of noise, while the magnitude of the bulk modulus is more sensitive to noise and can be reconstructed with less accuracy, in general, than the shear modulus. Furthermore, the phase of the bulk modulus, which is related to energy dissipation, can be accurately reconstructed.
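The abstract does not state the functional explicitly; purely as an illustrative sketch (the paper's actual functional may differ in form and weighting), a typical ECE functional penalizes the constitutive mismatch between the kinematically admissible strain field $\varepsilon(u)$ and the dynamically admissible stress field $\sigma$ for a trial moduli tensor $\mathbf{C}$:

```latex
\mathcal{E}(u,\sigma;\mathbf{C}) \;=\;
\frac{1}{2}\int_\Omega
\bigl(\sigma - \mathbf{C}:\varepsilon(u)\bigr) : \mathbf{C}^{-1} :
\bigl(\sigma - \mathbf{C}:\varepsilon(u)\bigr)\,\mathrm{d}\Omega
```

This quantity vanishes only when the two admissible fields also satisfy the constitutive law; in the steady-state dynamic viscoelastic setting, $\mathbf{C}$ is assembled from the complex, frequency-dependent shear and bulk moduli, which is why minimizing the functional updates those moduli.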
Situations occasionally arise in field measurements where direct optical access to the area of interest is not possible. In these cases the borescope is the standard imaging tool. Furthermore, if shape, displacement, or strain measurements are desired in these hidden locations, it would be advantageous to be able to perform digital image correlation (DIC) through the borescope. This paper presents the added complexities and errors associated with imaging through a borescope for DIC. Non-radial distortions and their effects on the measurements are discussed, along with a possible correction scheme.
A cantilever beam is released from an initial condition, and the velocity at the tip is recorded using a laser Doppler vibrometer. The ring-down time history is analyzed using the Hilbert transform, which yields the natural frequency and damping. An important issue with the Hilbert transform is its vulnerability to noise. The proposed method uses curve fitting to replace some of the time-differentiation and thereby suppress noise. Linear curve fitting gives very good results for linear beams with low damping; for nonlinear beams with higher damping, polynomial curve fitting captures the time variations. The method was used to estimate the quality factors of several shim metals and PZT bimorphs.
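The described procedure can be sketched end-to-end on a synthetic ring-down (all numerical values below are assumed test values, not the paper's data): take the analytic signal, then curve-fit the log-envelope and the unwrapped phase instead of differentiating them, which is where the noise suppression comes from:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic ring-down: x(t) = exp(-zeta*wn*t) * cos(wd*t), values assumed.
fs, fn, zeta = 10_000.0, 100.0, 0.01
t = np.arange(0.0, 1.0, 1.0 / fs)
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta**2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

z = hilbert(x)                                  # analytic signal
env, phase = np.abs(z), np.unwrap(np.angle(z))

# Curve fitting replaces time-differentiation: a linear fit to the
# log-envelope gives the decay rate, and a linear fit to the unwrapped
# phase gives the damped frequency (edges trimmed to avoid end effects).
k = slice(len(t) // 10, -len(t) // 10)
decay = -np.polyfit(t[k], np.log(env[k]), 1)[0]  # zeta * wn
wd_est = np.polyfit(t[k], phase[k], 1)[0]        # wd

fn_est = np.sqrt(wd_est**2 + decay**2) / (2.0 * np.pi)
zeta_est = decay / (2.0 * np.pi * fn_est)
print(fn_est, zeta_est)  # ~100 Hz, ~0.01
```

For a nonlinear beam, the linear `polyfit` calls would be replaced by higher-order polynomial fits so the instantaneous frequency and decay rate may vary over the record; the quality factor then follows as Q = 1/(2*zeta).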
Several commercial computational fluid dynamics (CFD) codes now have the capability to analyze Eulerian two-phase flow using the Rohsenow nucleate boiling model. Analysis of boiling due to one-sided heating in plasma-facing components (PFCs) is now receiving attention during the design of water-cooled first wall panels for ITER, which may encounter heat fluxes as high as 5 MW/m². Empirical thermal-hydraulic design correlations developed for long fission reactor channels are not reliable when applied to PFCs because fully developed flow conditions seldom exist. Star-CCM+ is one of the commercial CFD codes that can model two-phase flows. Like others, it implements the RPI model for nucleate boiling, but it also seamlessly transitions to a volume-of-fluid model for film boiling. By benchmarking the results of our 3D models against recent experiments on critical heat flux for both smooth rectangular channels and hypervapotrons, we determined the six unique input parameters that accurately characterize the boiling physics for ITER flow conditions under a wide range of absorbed heat flux. We can now exploit this capability to predict the onset of critical heat flux in these components. In addition, the results clearly illustrate the production and transport of vapor and its effect on heat transfer in PFCs from nucleate boiling through the transition to film boiling. This article describes the boiling physics implemented in CCM+ and compares the computational results to the benchmark experiments carried out independently in the United States and Russia. Temperature distributions agreed to within 10 °C for a wide range of heat fluxes from 3 MW/m² to 10 MW/m² and flow velocities from 1 m/s to 10 m/s in these devices.
Although the analysis is incapable of capturing the stochastic nature of critical heat flux (i.e., its time and location may depend on a local material defect or turbulence phenomenon), it is highly reliable in determining the heat flux at which boiling instabilities begin to dominate. Beyond this threshold, higher heat fluxes lead to the boiling crisis and eventual burnout. This predictive capability is essential in determining the critical heat flux margin for the design of complex 3D components.
Data have been acquired from a spanwise array of fluctuating wall pressure sensors beneath a wind tunnel wall boundary layer at Mach 2; invoking Taylor's hypothesis then allows the temporal signals to be converted into a spatial map of the wall pressure field. Improvements to the measurement technique were developed to establish the veracity of earlier tentative conclusions. An adaptive filtering scheme using a reference sensor was implemented to cancel the effects of wind tunnel acoustic noise and vibration. Coherent structures in the pressure fields were identified using an improved thresholding algorithm that reduced the occurrence of broken contours and spurious signals. Analog filters with sharper frequency cutoffs than digital filters produced signals of greater spectral purity. Coherent structures were confirmed in the fluctuating wall pressure field that resemble similar structures known to exist in the velocity field, in particular by exhibiting a spanwise meander and merging of events. However, the pressure data lacked the common spanwise alternation of positive and negative events found in velocity data, and conversely demonstrated a weak positive correlation in the spanwise direction.
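The abstract does not specify the adaptive filter used; a normalized LMS canceller driven by the reference sensor is a standard choice for this task and serves only as a hedged sketch of the idea (tap count, step size, and the tonal test signal below are all invented):

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=32, mu=0.5, eps=1e-8):
    """Normalized-LMS noise cancellation: subtract from `primary` the part
    that is linearly predictable from `reference` (e.g., tunnel acoustics
    and vibration picked up by a reference sensor)."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary, dtype=float)
    for k in range(n_taps, len(primary)):
        u = reference[k - n_taps:k][::-1]  # most recent reference samples
        e = primary[k] - w @ u             # cleaned sample = filter error
        w += mu * e * u / (u @ u + eps)    # normalized weight update
        out[k] = e
    return out

# Demo: a sensor seeing only a delayed, scaled copy of the reference tone
# should be driven to (near) zero once the filter converges.
k = np.arange(4000)
reference = np.sin(0.1 * k)
primary = 0.8 * np.sin(0.1 * (k - 5))
cleaned = nlms_cancel(primary, reference)
print(np.std(cleaned[3000:]) / np.std(primary))  # residual fraction << 1
```

In practice the reference channel must see the tunnel noise but not the wall-pressure signal of interest, or the canceller will remove part of the signal as well.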
International Defense and Homeland Security Simulation Workshop, DHSS 2011, Held at the International Mediterranean and Latin American Modeling Multiconference, I3M 2011
In the present paper the act of learner reflection during training with an adaptive or predictive computer-based tutor is considered a learner-system interaction. Incorporating reflection and real-time evaluation of peer performance into adaptive and predictive computer-based tutoring can support the development of automated adaptation. Allowing learners to refine and inform student models through reflective practice with independent open learner models may improve overall accuracy and relevancy. Given the emphasis on self-directed peer learning with adaptive technology, learner and instructor modeling continue to be critical research areas for education and training technology.
The chemical industry is one of the largest industries in the United States and a vital contributor to global chemical supply chains. The U.S. Department of Homeland Security (DHS) Science and Technology Directorate has tasked Sandia National Laboratories (Sandia) with developing an analytical capability to assess the interdependencies and complexities of the nation's critical infrastructures on and with the chemical sector. This work is being performed to expand the infrastructure analytical capabilities of the National Infrastructure Simulation and Analysis Center (NISAC). To address this need, Sandia has focused on developing an agent-based methodology for simulating the domestic chemical supply chain and determining the economic impacts resulting from large-scale disruptions to the chemical sector. Modeling the chemical supply chain is unique because the flow of goods and services is guided by process thermodynamics and reaction kinetics. Sandia has integrated an agent-based microeconomic simulation tool, N-ABLE™, with various chemical industry datasets to abstract the chemical supply chain behavior. An enterprise design within N-ABLE™ consists of a collection of firms within a supply chain network; each firm interacts with others through chemical reactions, markets, and physical infrastructure. The supply and demand within each simulated network must be consistent with respect to mass balances of every chemical within the network. Production decisions at every time step are a set of constrained linear program (LP) solutions that minimize the difference between desired and actual outputs. We illustrate the methodology with examples of modeled petrochemical supply chains under an earthquake event. The supply chain impacts of upstream and downstream chemicals associated with organic intermediates after a short-term shutdown in the affected area are discussed.
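The production-decision LP can be sketched in miniature. The following is a hypothetical one-firm, one-reaction instance (the yield, feedstock availability, and demand values are invented, and N-ABLE's actual formulation is certainly richer): minimize the gap between desired and actual output subject to a mass balance, with the absolute value linearized through a slack variable t:

```python
import numpy as np
from scipy.optimize import linprog

# One firm converts feedstock A into product B with mass yield y_AB.
y_AB = 0.9          # kg of B per kg of A (illustrative yield)
feed_avail = 100.0  # kg of A available this time step
desired_B = 95.0    # downstream demand for B this time step

# Decision vector v = [x_A, x_B, t]:
#   x_A = A consumed, x_B = B produced, t = |desired_B - x_B|.
c = [0.0, 0.0, 1.0]                # minimize the shortfall/overshoot t
A_eq = [[-y_AB, 1.0, 0.0]]         # mass balance: x_B = y_AB * x_A
b_eq = [0.0]
A_ub = [[0.0, 1.0, -1.0],          #  x_B - t <=  desired_B
        [0.0, -1.0, -1.0]]         # -x_B - t <= -desired_B
b_ub = [desired_B, -desired_B]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, feed_avail), (0.0, None), (0.0, None)])
print(res.x)  # all 100 kg of A used, 90 kg of B made, demand missed by 5
```

In a supply chain network, one such LP per firm would be coupled through markets (prices and orders) and shared infrastructure, with the mass balances ensuring chemical consistency across the network at every time step.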