Publications

Search results

Machine learning at the edge to improve in-field safeguards inspections

Annals of Nuclear Energy

Shoman, Nathan; Williams, Kyle A.; Balsara, Burzin; Ramakrishnan, Adithya; Kakish, Zahi K.; Coram, Jamie L.; Honnold, Philip H.; Rivas, Tania; Smartt, Heidi A.

Artificial intelligence (AI) and machine learning (ML) are near-ubiquitous in day-to-day life, from cars with automated driver assistance to recommender systems, generative content platforms, and large language model chatbots. Implementing AI as a tool for international safeguards could significantly decrease the burden on safeguards inspectors and nuclear facility operators. AI would allow inspectors to complete their in-field activities more quickly while identifying patterns and anomalies, freeing inspectors to focus on the uniquely human component of inspections. Sandia National Laboratories has spent the past two and a half years developing on-device machine learning for a combined digital and robotic assistant. This platform, which we term INSPECTA, has numerous on-device machine learning capabilities that have been demonstrated at the laboratory scale. This work describes early successes in implementing AI/ML capabilities to reduce the burden of tedious inspector tasks such as seal examination, information recall, note taking, and more.

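The abstract does not detail how the on-device inference is implemented; as a minimal illustrative sketch (not the authors' implementation), a quantized image classifier for seal examination could be run at the edge with TensorFlow Lite. The model file, input shape, and helper function below are hypothetical assumptions.

# Minimal sketch of on-device (edge) inference for seal-image classification.
# "seal_classifier.tflite" is a hypothetical quantized model, not an artifact
# described in the paper.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="seal_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify_seal(image):
    # image: preprocessed HxWx3 array matching the model's expected input size
    batch = np.expand_dims(image, axis=0).astype(input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], batch)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return int(np.argmax(scores)), float(np.max(scores))  # (class id, confidence)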

Effectiveness of Warm-Start PPO for Guidance with Highly Constrained Nonlinear Fixed-Wing Dynamics

Proceedings of the American Control Conference

Coletti, Christian; Williams, Kyle A.; Lehman, Hannah C.; Kakish, Zahi K.; Whitten, William D.; Parish, Julie M.

Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.

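The paper's network architecture and training hyperparameters are not given in the abstract; as a minimal sketch under assumed dimensions, warm-starting can be viewed as supervised regression of a policy network onto state-action pairs logged from the classical guidance law, after which the learned weights initialize the PPO actor. The 12-dimensional state, 4-dimensional action, layer sizes, and MSE loss below are illustrative assumptions, not the authors' settings.

# Sketch of warm-starting (behavior cloning) a guidance policy before PPO fine-tuning.
import torch
import torch.nn as nn

class GuidancePolicy(nn.Module):
    # Maps a full 6-DOF aircraft state (assumed 12-dim) to actuator commands (assumed 4-dim).
    def __init__(self, obs_dim=12, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def warm_start(policy, expert_states, expert_actions, epochs=50, lr=1e-3):
    # Behavior-clone the classical guidance law: regress the policy's output
    # onto expert actions; the resulting weights then initialize the PPO actor.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(expert_states), expert_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy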

Inspecta Annual Technical Report

Smartt, Heidi A.; Coram, Jamie L.; Dorawa, Sydney D.; Laros, James H.; Honnold, Philip H.; Kakish, Zahi K.; Pickett, Chris A.; Shoman, Nathan; Spence, Katherine P.

Sandia National Laboratories (SNL) is designing and developing an Artificial Intelligence (AI)-enabled smart digital assistant (SDA), Inspecta (International Nuclear Safeguards Personal Examination and Containment Tracking Assistant). The goal is to provide inspectors with an in-field digital assistant that can perform tasks identified as tedious, challenging, or prone to human error. During 2021, we defined the requirements for Inspecta based on reviews of International Atomic Energy Agency (IAEA) publications and interviews with former IAEA inspectors. We then mapped the requirements to current commercial or open-source technical capabilities to provide a development path for an initial Inspecta prototype while highlighting potential research and development tasks. We selected a high-impact inspection task that could be performed by an early Inspecta prototype and are developing the initial architecture, including the hardware platform. This paper describes the methodology for selecting an initial task scenario, the first set of Inspecta skills needed to assist with that task scenario, and finally the design and development of Inspecta’s architecture and platform.
