Publications

9 Results

Human performance modeling for system of systems analytics: soldier fatigue

Lawton, Craig R.; Campbell, James E.; Miller, Dwight P.

The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this end, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g., cockpit simulations), and the Navy's Human Performance Center (HPC), established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex, network-centric systems of systems (SoS). This report presents research and development in the area of HPM in an SoS context. Specifically, it addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.


System of systems modeling and simulation

Cranwell, Robert M.; Campbell, James E.; Anderson, Dennis J.; Thompson, Bruce M.; Lawton, Craig R.; Shirah, Donald N.

Analyzing the performance of a complex System of Systems (SoS) requires a systems engineering approach. Many such SoS exist in the military domain. Examples include the Army's next-generation Future Combat Systems 'Unit of Action' and the Navy's aircraft carrier battle group. In the case of a Unit of Action, a system of combat vehicles, support vehicles, and equipment is organized in an efficient configuration that minimizes logistics footprint while still maintaining the required performance characteristics (e.g., operational availability). In this context, systems engineering means developing a global model of the entire SoS and all component systems and interrelationships. This global model supports analyses that result in an understanding of the interdependencies and emergent behaviors of the SoS. Sandia National Laboratories will present a robust toolset that includes methodologies for developing a SoS model, defining state models, and simulating a system of state models over time. This toolset is currently used to perform logistics supportability and performance assessments of the set of Future Combat Systems (FCS) for the U.S. Army's Program Manager Unit of Action.


System of systems modeling and analysis

Campbell, James E.; Anderson, Dennis J.; Shirah, Donald N.

This report documents the results of an LDRD program entitled 'System of Systems Modeling and Analysis' that was conducted during FY 2003 and FY 2004. Systems that themselves consist of multiple systems (referred to here as System of Systems or SoS) introduce a level of complexity to systems performance analysis and optimization that is not readily addressable by existing capabilities. The objective of the 'System of Systems Modeling and Analysis' project was to develop an integrated modeling and simulation environment that addresses the complex SoS modeling and analysis needs. The approach to meeting this objective involved two key efforts. First, a static analysis approach, called state modeling, has been developed that is useful for analyzing the average performance of systems over defined use conditions. The state modeling capability supports analysis and optimization of multiple systems and multiple performance measures or measures of effectiveness. The second effort involves time simulation which represents every system in the simulation using an encapsulated state model (State Model Object or SMO). The time simulation can analyze any number of systems including cross-platform dependencies and a detailed treatment of the logistics required to support the systems in a defined mission.
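The encapsulated-state-model idea can be illustrated with a toy simulation. The class and parameter names below are our own assumptions for illustration, not the actual toolset's API; real SMOs carry far richer state than a single up/down flag.

```python
import random

class StateModelObject:
    """Toy stand-in for an encapsulated state model (SMO): a system that
    steps between 'up' and 'down' states each hour with fixed per-hour
    failure and repair probabilities."""
    def __init__(self, p_fail, p_repair, rng):
        self.up = True
        self.p_fail = p_fail
        self.p_repair = p_repair
        self.rng = rng

    def step(self):
        # Failure and repair are modeled as Bernoulli trials each time step.
        if self.up:
            self.up = self.rng.random() >= self.p_fail
        else:
            self.up = self.rng.random() < self.p_repair

def simulate_fleet(n_systems, hours, p_fail=0.01, p_repair=0.2, seed=7):
    """Step a fleet of SMOs through time and return operational
    availability: the fraction of system-hours spent in the 'up' state."""
    rng = random.Random(seed)
    fleet = [StateModelObject(p_fail, p_repair, rng) for _ in range(n_systems)]
    up_hours = 0
    for _ in range(hours):
        for s in fleet:
            s.step()
            up_hours += s.up
    return up_hours / (n_systems * hours)
```

With the default rates the long-run availability approaches p_repair / (p_fail + p_repair) ≈ 0.95, which is the kind of measure of effectiveness the state-model analyses optimize.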


Algorithm development for Prognostics and Health Management (PHM)

Swiler, Laura P.; Campbell, James E.; Lowder, Kelly S.; Doser, Adele D.

This report summarizes the results of a three-year LDRD project on prognostics and health management (PHM). 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and an architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components; it then generates maintenance and failure events and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.
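As an illustration of the Consequence-Engine style of analysis, here is a toy Monte Carlo estimate of availability from alternating exponentially distributed up and down intervals. This is a sketch of ours standing in for the report's engine, which additionally handles schedules, spares, and cost.

```python
import random

def simulate_availability(mtbf, mttr, horizon, n_runs=2000, seed=1):
    """Estimate availability over a mission horizon by simulating
    alternating exponential up intervals (mean mtbf) and down intervals
    (mean mttr), averaged over n_runs replications."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(n_runs):
        t = 0.0
        while t < horizon:
            ttf = rng.expovariate(1.0 / mtbf)   # time to next failure
            up_total += min(ttf, horizon - t)   # credit up time, truncated
            t += ttf
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)    # repair (down) time
    return up_total / (n_runs * horizon)

# For comparison, steady-state availability is MTBF / (MTBF + MTTR).
```

A consequence engine would rerun such a simulation under alternative maintenance policies and compare the resulting availability and cost.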


Automated analysis of failure event data

Campbell, James E.; Thompson, Bruce M.

This paper focuses on fully automated analysis of failure event data in the concept and early development stage of a semiconductor-manufacturing tool. In addition to presenting a wide range of statistical and machine-specific performance information, algorithms have been developed to examine reliability growth and to identify major contributors to unreliability. These capabilities are being implemented in a new software package called Reliadigm. When coupled with additional input regarding repair times and parts availability, the analysis software also provides spare parts inventory optimization based on genetic optimization methods. The type of question to be answered is: If this tool were placed with a customer for beta testing, what would be the optimal spares kit to meet equipment reliability goals for the lowest cost? The new algorithms are implemented in Windows® software and are easy to apply. This paper presents a preliminary analysis of failure event data from three IDEA machines currently in development. The paper also includes an optimal spare parts kit analysis.
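Reliadigm's algorithms are not spelled out in the abstract, but reliability growth from failure event data is commonly examined with the Crow-AMSAA (NHPP power-law) model. The sketch below shows its standard time-terminated maximum-likelihood fit; it is a textbook stand-in, not the paper's own method.

```python
from math import log

def crow_amsaa_mle(failure_times, T):
    """Time-terminated maximum-likelihood fit of the Crow-AMSAA (NHPP
    power-law) model, in which the expected number of failures by time t
    is lam * t**beta. beta < 1 indicates reliability growth (failures
    arriving more slowly as testing proceeds). failure_times are the
    cumulative test times of the observed failures, all within the total
    test time T."""
    n = len(failure_times)
    beta = n / sum(log(T / t) for t in failure_times)
    lam = n / T ** beta
    return beta, lam
```

For example, four failures early in a 1000-hour test (at 10, 50, 150, and 400 hours) fit with beta ≈ 0.38, a strong growth signal.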


Identification of components to optimize improvement in system reliability

Campbell, James E.

The fields of reliability analysis and risk assessment have grown dramatically since the 1970s. There are now bodies of literature and standard practices which cover quantitative aspects of system analysis such as failure rate and repair models, fault and event tree generation, minimal cut sets, classical and Bayesian analysis of reliability, component and system testing techniques, decomposition methods, etc. In spite of the growth in the sophistication of reliability models, however, little has been done to integrate optimization models within a reliability analysis framework. That is, reliability models often focus on characterizing system structure in terms of topology and the failure/availability characteristics of components, rather than on optimizing improvements to that structure. A number of approaches have been proposed to help identify the components of a system that have the largest influence on overall system reliability. While this may help rank order the components, it does not necessarily help a system design team identify which components they should improve to optimize overall reliability (it may be cheaper and more effective to focus on improving two or three components of smaller importance than one component of larger importance). In this paper, we present an optimization model that identifies the components to be improved to maximize the increase in system mean time between failures (MTBF), subject to a fixed budget constraint. A dual formulation of the model is to minimize cost, subject to achieving a certain level of system reliability.
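One way to make the budget-constrained MTBF formulation concrete is to assume a series system, where the system failure rate is the sum of component failure rates. That assumption, and the exhaustive search, are ours for illustration; the paper's actual formulation may differ.

```python
from itertools import combinations

def best_improvements(lambdas, options, budget):
    """Pick the subset of component-improvement options that maximizes
    series-system MTBF = 1 / sum(failure rates), subject to a budget.
    lambdas: baseline failure rates per component.
    options: component index -> (cost, improved failure rate).
    Exhaustive search, fine for the handful of components sketched here."""
    items = list(options.items())
    best_mtbf, best_picks = 1.0 / sum(lambdas), ()
    for r in range(1, len(items) + 1):
        for subset in combinations(items, r):
            cost = sum(c for _, (c, _) in subset)
            if cost > budget:
                continue
            lam = list(lambdas)
            for idx, (_, new_rate) in subset:
                lam[idx] = new_rate
            mtbf = 1.0 / sum(lam)
            if mtbf > best_mtbf:
                best_mtbf, best_picks = mtbf, tuple(i for i, _ in subset)
    return best_mtbf, best_picks
```

Note how a budget can favor two cheaper improvements over one expensive one, exactly the situation the abstract describes.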


Multiple weight stepwise regression

Campbell, James E.

In many science and engineering applications, there is an interest in predicting the outputs of a process for given levels of inputs. In order to develop a model, one could run the process (or a simulation of the process) at a number of points (a point would be one run at one set of possible input values) and observe the values of the outputs at those points. These observations can be used to predict the values of the outputs for other values of the inputs. Since the outputs are a function of the inputs, we can generate a surface in the space of possible inputs and outputs. This surface is called a response surface. In some cases, collecting the data needed to generate a response surface can be very expensive. Thus, in these cases, there is a powerful incentive to minimize the sample size while building better response surfaces. One such case is the semiconductor equipment manufacturing industry. Semiconductor manufacturing equipment is complex and expensive. Depending upon the type of equipment, the number of control parameters may range from 10 to 30, with perhaps 5 to 10 being important. Since a single run can cost hundreds or thousands of dollars, it is very important to have efficient methods for building response surfaces. A current approach to this problem is to do the experiment in two stages. First, a traditional design (such as a fractional factorial) is used to screen variables. After deciding which variables are significant, additional runs of the experiment are conducted. The original runs and the new runs are used to build a model with the significant variables. However, the original (screening) runs are not as helpful for building the model as some other points might have been. This paper presents a point selection scheme that is more efficient than traditional designs.
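The screening stage of the traditional two-stage approach can be illustrated with a toy one-variable-at-a-time R-squared screen. This is a simplification of ours, not the fractional-factorial screening or the paper's point selection scheme; the function names and the 0.5 threshold are assumptions.

```python
def one_var_r2(x, y):
    """R^2 of the least-squares fit y = a + b*x for one candidate input."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    tss = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return 1.0 - rss / tss

def screen_variables(columns, y, threshold=0.5):
    """Keep the inputs whose one-variable fit explains at least
    `threshold` of the output variance; the rest are screened out."""
    return {name for name, col in columns.items()
            if one_var_r2(col, y) >= threshold}
```

The paper's contribution is choosing screening points that remain useful when the final response-surface model is built, rather than screening and modeling with separate designs.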


Algorithms for treating redundancy in repairable and non-repairable systems

Campbell, James E.

This report presents equations and computational algorithms for analyzing reliability of several forms of redundancy in repairable and non-repairable systems. For repairable systems, active, standby, and R of N redundancy with and without repair are treated. For non-repairable systems, active, standby, and R of N redundancy are addressed. These equations can be used to calculate mean time between failures, mean time to repair, and reliability for complex systems involving redundancy.
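For non-repairable components, the active and R of N cases described above have standard closed forms. A minimal sketch (the function names are ours, not the report's):

```python
from math import comb

def k_of_n_reliability(k, n, p):
    """Reliability of a non-repairable R-of-N active group: the system
    works if at least k of n identical components (each with reliability
    p at the mission time of interest) work. Binomial sum over the
    possible numbers of working components."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def active_parallel_reliability(n, p):
    """Full active redundancy (1-of-n): system fails only if all n fail."""
    return 1.0 - (1.0 - p)**n
```

For example, a 2-of-3 voting group of components with reliability 0.9 yields 3(0.81)(0.1) + 0.729 = 0.972. Standby redundancy and the repairable cases require the time-dependent treatment the report develops.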


Application of generic risk assessment software to radioactive waste disposal

Campbell, James E.

Monte Carlo methods are used in a variety of applications such as risk assessment, probabilistic safety assessment, and reliability analysis. While Monte Carlo methods are simple to use, their application can be laborious. A new microcomputer software package has been developed that substantially reduces the effort required to conduct Monte Carlo analyses. The Sensitivity and Uncertainty Analysis Shell (SUNS) is a software shell in the sense that a wide variety of application models can be incorporated into it. SUNS offers several useful features, including a menu-driven environment, a flexible input editor, both Monte Carlo and Latin hypercube sampling, the ability to perform both repeated trials and parametric studies in a single run, and both statistical and graphical output. SUNS also performs all required file management functions.
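Latin hypercube sampling, one of the SUNS sampling options mentioned above, can be sketched in a few lines. This is an illustrative implementation of the general technique, not SUNS's own code.

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin hypercube sample on the unit cube [0, 1)^n_vars: each
    variable's range is cut into n_samples equal strata, every stratum is
    used exactly once per variable, and strata are paired across variables
    by independent random permutations. Scale each column afterwards to
    the real input range of that variable."""
    rng = random.Random(seed)
    points = [[0.0] * n_vars for _ in range(n_samples)]
    for j in range(n_vars):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            points[i][j] = (strata[i] + rng.random()) / n_samples
    return points
```

The stratification guarantees full coverage of each input's range with far fewer runs than plain Monte Carlo typically needs, which is why shells like SUNS offer it alongside simple random sampling.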
