Sandia Information Sciences Initiative
Sandia has identified autonomy as a strategic initiative and an important area for providing national leadership. A key question is, “How might autonomy change how we think about the national security challenges we address and the kinds of solutions we deliver?” Three workshops at Sandia early in 2017 brought together internal stakeholders and potential academic partners in autonomy to address this question. The first focused on programmatic applications and needs. The second explored existing internal capabilities and research and development needs. This report summarizes the outcome of the third workshop, held March 3, 2017 in Albuquerque, NM, which engaged Academic Alliance partners in Sandia's autonomy efforts by discussing research needs and synergistic areas of interest within the complex systems and system modeling domains, and by identifying opportunities to partner on Laboratory Directed Research and Development and other joint research.
This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016, on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of the pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection of two important Sandia communities: the complex systems (CS) modeling community and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim but will nonetheless be used, sometimes in real time, by decision makers?
We report on the use of a supercomputer simulation to study performance sensitivity to systematic changes in the job parameters of run time, number of CPUs, and interarrival time. We also examine the effect of changes in share allocation and service ratio for job prioritization under a Fair Share queuing algorithm to see the effect on facility figures of merit. We used log data from the ASCI supercomputer Blue Mountain and the ASCI simulator BIRMinator to perform this study. The key finding is that the performance of the supercomputer is quite sensitive to all the job parameters: job interarrival rate is the most sensitive parameter, particularly at the highest rates, while increasing run time is the least sensitive with respect to utilization and rapid turnaround. We also find that this facility is running near its maximum practical utilization. Finally, we show the importance of simulation in understanding the performance sensitivity of a supercomputer.
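The sensitivity study above replays job logs through a facility simulator. As a rough illustration of the idea (not the BIRMinator tool itself), the sketch below runs a toy event-driven simulation that sweeps the mean interarrival time of synthetic jobs and reports utilization and mean wait; the parameter values and the simple FCFS scheduler are assumptions for illustration only.

```python
import heapq
import random

def simulate(num_cpus=64, n_jobs=2000, mean_interarrival=10.0,
             mean_runtime=120.0, max_job_cpus=16, seed=0):
    """Toy FCFS simulation: jobs need (cpus, runtime); report utilization and wait."""
    rng = random.Random(seed)
    # Generate a synthetic workload (exponential interarrival and runtime).
    t, jobs = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(1.0 / mean_interarrival)
        jobs.append((t, rng.randint(1, max_job_cpus),
                     rng.expovariate(1.0 / mean_runtime)))
    free = num_cpus
    running = []            # heap of (finish_time, cpus)
    queue = []              # FCFS wait queue
    now, busy_cpu_time, total_wait = 0.0, 0.0, 0.0
    i = 0
    while i < len(jobs) or queue or running:
        # Advance time to the next event: an arrival or a job completion.
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        next_finish = running[0][0] if running else float("inf")
        prev, now = now, min(next_arrival, next_finish)
        busy_cpu_time += (num_cpus - free) * (now - prev)
        if next_finish <= next_arrival:
            _, cpus = heapq.heappop(running)
            free += cpus
        else:
            queue.append(jobs[i]); i += 1
        # Start queued jobs in FCFS order while the head of the queue fits.
        while queue and queue[0][1] <= free:
            arrival, cpus, runtime = queue.pop(0)
            total_wait += now - arrival
            free -= cpus
            heapq.heappush(running, (now + runtime, cpus))
    return busy_cpu_time / (num_cpus * now), total_wait / n_jobs

for ia in (2.0, 5.0, 10.0, 20.0):
    util, wait = simulate(mean_interarrival=ia)
    print(f"interarrival {ia:5.1f}s -> utilization {util:.2f}, mean wait {wait:8.1f}s")
```

Sweeping interarrival time in this way shows the same qualitative behavior the study describes: utilization and wait times respond sharply once arrivals approach the machine's capacity.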
Proceedings - IEEE International Conference on Cluster Computing, ICCC
This paper presents an analysis of utilizing unused cycles on supercomputers through the use of many small jobs. What we call "interstitial computing" is important to supercomputer centers for both productivity and political reasons. Interstitial computing exploits the fact that small jobs are more or less fungible consumers of compute cycles and are more efficient for bin packing than the typical jobs on a supercomputer. An important feature of interstitial computing is that it does not have a significant impact on the makespan of the native jobs on the machine. Also, a facility can obtain higher utilizations that would otherwise be possible only with more complicated schemes or with very long wait times. The key contribution of this paper is that it provides theoretical and empirical guidelines for users and administrators for how currently unused supercomputer cycles may be exploited. We find that interstitial computing is a more effective means of increasing machine utilization than increasing native job run times or sizes.
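As a concrete illustration of why small jobs pack well into leftover cycles, the sketch below greedily backfills idle "holes" (spare nodes available for a known window before the native schedule needs them back) with small interstitial jobs. The hole sizes, job mix, and first-fit policy are illustrative assumptions, not the paper's actual scheduler.

```python
# Hypothetical illustration: greedily backfill idle "holes" with small jobs.
# A hole is (spare_nodes, seconds until the native schedule needs them back).
holes = [(8, 1800), (2, 600), (32, 300), (4, 7200)]

# Interstitial candidates as (nodes, runtime_seconds), smallest demand first.
small_jobs = sorted(
    [(1, 300), (1, 300), (2, 600), (4, 900), (1, 1200), (8, 300), (2, 1800)],
    key=lambda j: j[0] * j[1])

recovered = 0  # node-seconds put to productive use
for spare_nodes, window in holes:
    free = spare_nodes
    for job in list(small_jobs):
        nodes, runtime = job
        # A job fits only if it finishes before the nodes are needed again
        # and enough spare nodes remain, so the native makespan is untouched.
        if nodes <= free and runtime <= window:
            free -= nodes
            recovered += nodes * runtime
            small_jobs.remove(job)

idle = sum(nodes * window for nodes, window in holes)
print(f"recovered {recovered} of {idle} idle node-seconds "
      f"({100 * recovered / idle:.0f}%)")
```

Because the small jobs must fit entirely inside existing holes, they never delay a native job, which is the property the paper identifies as essential for interstitial computing.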
Proceedings - CCGrid 2003: 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid
Proceedings of the Hawaii International Conference on System Sciences
In manufacturing, the conceptual design and detailed design stages are typically regarded as sequential and distinct. Decisions in conceptual design are often made with little information as to how they will affect detailed design or manufacturing process specification. Many possibilities and unknowns exist in conceptual design, where ideas about product shape and functionality are changing rapidly. Few if any tools exist to aid in this difficult, amorphous stage, in contrast to the many CAD and analysis tools for detailed design, where much more is known about the final product. The Materials Process Design Environment (MPDE) is a collaborative problem solving environment (CPSE) developed so that geographically dispersed designers in both the conceptual and detailed stages can work together and understand the impacts of their design decisions on functionality, cost, and manufacturability.
As part of a computerized system (SmartWeld) developed at Sandia National Laboratories to facilitate agile manufacturing of welded assemblies, a weld schedule database (WSDB) was also developed. SmartWeld's overall goals are to shorten the design-to-product time frame and to promote right-the-first-time weldment design and manufacture by providing welding process selection guidance to component designers. The associated WSDB evolved into a substantial subproject by itself. At first, it was thought that the database would store perhaps 50 parameters about a weld schedule. This was a woeful underestimate: the current WSDB has over 500 parameters defined in 73 tables. This includes data about the weld, the piece parts involved, the piece part geometry, and great detail about the schedule and intervals involved in performing the weld. This complex database was built using information modeling techniques. Information modeling is a process that creates a model of objects and their roles for a given domain (i.e., welding). The Natural-Language Information Analysis Methodology (NIAM) technique was used, which is characterized by: (1) elementary facts being stated in natural language by the welding expert, (2) determinism (the resulting model is provably repeatable, i.e., it gives the same answer every time), and (3) extensibility (the model can be added to without changing existing structure). The information model produced a highly normalized relational schema that was translated to the Oracle(TM) relational database management system for implementation.
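To make the idea of a highly normalized weld schedule schema concrete, here is a minimal, hypothetical fragment expressed with SQLite rather than Oracle. The table and column names are invented for illustration and bear no relation to the actual 73-table WSDB design.

```python
import sqlite3

# Hypothetical, much-reduced fragment of a normalized weld schedule schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE piece_part (
    part_id      INTEGER PRIMARY KEY,
    material     TEXT NOT NULL,
    thickness_mm REAL NOT NULL
);
CREATE TABLE weld (
    weld_id     INTEGER PRIMARY KEY,
    joint_type  TEXT NOT NULL,          -- e.g. butt, lap, fillet
    part_a_id   INTEGER NOT NULL REFERENCES piece_part(part_id),
    part_b_id   INTEGER NOT NULL REFERENCES piece_part(part_id)
);
CREATE TABLE weld_schedule (
    schedule_id       INTEGER PRIMARY KEY,
    weld_id           INTEGER NOT NULL REFERENCES weld(weld_id),
    process           TEXT NOT NULL,    -- e.g. GTAW, laser, electron beam
    travel_speed_mm_s REAL,
    power_w           REAL
);
""")
conn.execute("INSERT INTO piece_part VALUES (1, '304 stainless', 1.5)")
conn.execute("INSERT INTO piece_part VALUES (2, '304 stainless', 1.5)")
conn.execute("INSERT INTO weld VALUES (1, 'butt', 1, 2)")
conn.execute("INSERT INTO weld_schedule VALUES (1, 1, 'laser', 25.0, 1200.0)")

# Query proven schedules by joint type and material, the kind of
# context-driven lookup a design tool might perform against the archive.
rows = conn.execute("""
    SELECT w.joint_type, p.material, s.process, s.power_w
    FROM weld_schedule s
    JOIN weld w ON w.weld_id = s.weld_id
    JOIN piece_part p ON p.part_id = w.part_a_id
    WHERE w.joint_type = 'butt' AND p.material LIKE '%stainless%'
""").fetchall()
print(rows)
```

Normalizing the weld, piece part, and schedule facts into separate tables in this way is what lets the archive answer such queries without duplicating data across records.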
Results from SmartWeld's first working session involving in-progress designs are presented. The Welding Advisor component of SmartWeld was thoroughly exercised, evaluating all eleven welds of the selected part. The Welding Advisor is an expert system implemented with object-oriented techniques for knowledge representation. With two welding engineers in attendance, the recommendations of the Welding Advisor were thoroughly examined and critiqued for accuracy and for areas of improvement throughout the working session. The Weld Schedule Database component of SmartWeld was also exercised. It is a historical archive of proven, successful weld schedules that can be intelligently searched using the current context of SmartWeld's problem-solving state. On all eleven welds, the experts agreed that the Welding Advisor recommended the lowest-risk options. Of the Advisor's recommendations, six welds agreed completely with the experts, two welds had their joint geometry modified for production, and three welds were not modified but extra care was exercised during welding.
Expert system implementation can take numerous forms, ranging from traditional declarative rule-based systems with if-then syntax to imperative programming languages that capture expertise in procedural code. The artificial intelligence community generally thinks of expert systems as rules or rule bases plus an inference engine to process the knowledge. The Welding Advisor developed at Sandia National Laboratories and described in this paper deviates from this by codifying expertise using object representations and methods. Objects allow computer scientists to model the world as humans perceive it, giving us a very natural way to encode expert knowledge. The design of the Welding Advisor, which generates and evaluates solutions, will be compared and contrasted with a traditional rule-based system.
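To make the contrast concrete, here is a minimal, hypothetical sketch of the object-oriented style described above: each welding process is an object whose methods encode applicability and risk judgments, and an "advisor" simply generates and ranks candidates. The classes, attributes, and scoring rules are invented for illustration and are not the actual Welding Advisor knowledge base.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    material: str
    thickness_mm: float
    access: str          # "one-sided" or "two-sided"

class WeldProcess:
    """Base class: each subclass encodes one process's expert knowledge."""
    name = "generic"
    def applicable(self, joint: Joint) -> bool:
        raise NotImplementedError
    def risk(self, joint: Joint) -> float:
        """Lower is better; subclasses encode expert judgment as methods."""
        raise NotImplementedError

class LaserWeld(WeldProcess):
    name = "laser"
    def applicable(self, joint):
        return joint.thickness_mm <= 3.0
    def risk(self, joint):
        # Thin sections and one-sided access favor laser in this toy model.
        return 0.1 + (0.2 if joint.access == "two-sided" else 0.0)

class GtawWeld(WeldProcess):
    name = "GTAW"
    def applicable(self, joint):
        return joint.thickness_mm <= 10.0
    def risk(self, joint):
        return 0.3 + 0.02 * joint.thickness_mm

def advise(joint, processes):
    """Generate-and-evaluate: keep applicable processes, rank by risk."""
    candidates = [p for p in processes if p.applicable(joint)]
    return sorted(candidates, key=lambda p: p.risk(joint))

joint = Joint(material="304 stainless", thickness_mm=1.5, access="one-sided")
for p in advise(joint, [LaserWeld(), GtawWeld()]):
    print(f"{p.name}: risk {p.risk(joint):.2f}")
```

In a traditional rule-based system the same knowledge would be scattered across if-then rules fired by an inference engine; encapsulating it in process objects keeps each process's expertise in one place, which is the design trade-off the paper examines.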