The 9th Annual Sandia Machine Learning/Deep Learning (MLDL) Workshop, hosted in Albuquerque, New Mexico, was a success! Thank you to all presenters and participants! Check back for more information about the upcoming 2026 workshop!
ATTENDANCE
The workshop aims to provide a platform for members of the MLDL research community to network, learn, and share their work, allowing participants to gain in-depth exposure to the field of MLDL. There is no cost to attend the workshop.
MIXERS
Each year, the workshop features mixers that are open to the public. Stay tuned for details on the dates and times of the 2026 workshop mixers.
PURPOSE
- Present work across various national laboratories and collaborators in machine learning (ML) and deep learning (DL).
- Network with staff and management at Sandia focused on work in these areas and discuss ideas for potential new work and collaboration.
- Teach and test MLDL methods via tutorials for those interested in getting hands-on experience with ML and DL.
PRESENTERS
2025 KEYNOTE SPEAKERS

The Hon. Dr. Dimitri Kusnezov is VP of Science and Technology for FutureSafe, Nuclear Threat Initiative. Dr. Kusnezov has served in numerous scientific and national security positions, including former Under Secretary for Science and Technology at the U.S. Department of Homeland Security (DHS); former Deputy Under Secretary for Artificial Intelligence (AI) & Technology at the U.S. Department of Energy (DOE); and former National Nuclear Security Administration (NNSA) Chief Scientist, serving as Director of both the Advanced Simulation and Computing and the National Security Science, Technology and Engineering programs. He has created numerous government programs, working across agencies and with international partners, the private sector, and philanthropic entities.
Prior to government service, Dr. Kusnezov served on the Yale University faculty for more than a decade as a professor of Theoretical Physics. He earned his MS in Physics and PhD in Theoretical Nuclear Physics at Princeton University and received Bachelor of Arts degrees in Physics and in Pure Mathematics with highest honors from UC Berkeley.
AI and Emerging Technologies in an Era of Rapid Change
We are in the early days of a transformational time without historical precedent. The pace of innovation across dozens of emerging technologies, together with their growing democratization and convergence, is resetting many definitions and assumptions, as well as their impacts on safety and security. In this talk, I will survey examples of what has been done and discuss opportunities. This will include some thoughts on what needs to come beyond AI and the unique roles of the NNSA laboratories.

Melanie E. Moses is a Professor of Computer Science and Biology at the University of New Mexico (UNM) and an External Faculty Member at the Santa Fe Institute. She earned a B.S. from Stanford University in Symbolic Systems and a PhD in Biology from UNM. Her interdisciplinary research crosses the boundaries of Computer Science and Biology by modeling search processes in complex adaptive systems, such as ants collecting food and immune systems responding to viral infection. Her bio-inspired approach to computation has built swarms of ant-like robots that autonomously cooperate to forage for resources, and bird-like UAVs that monitor environmental conditions, including gas emissions from volcanoes.
She has mentored dozens of graduate and undergraduate students and led NM CSforAll and the NASA Swarmathon to engage thousands of students in computer science, from high school through graduate school. She co-founded the UNM-SFI Working Group on Algorithmic Justice and is on the leadership team of the UNM ADVANCE program to support faculty success. She currently serves on the Computing Research Association’s Widening Participation board, is an Advisor on Artificial Intelligence to the Vice President for Research, and chairs the New Mexico AI Consortium.
Intelligence at Scale: Energy and Infrastructure for Trustworthy AI
Recent advances in Artificial Intelligence rest on decades of research; an extraordinary set of tools, techniques, and computational infrastructure; and enormous investments in electricity. In the first part of this talk, Dr. Moses will examine how much energy today’s AI systems consume, placing those demands in the context of historical scaling trends, including the metabolic scaling of animal brains, the energetic efficiency of computing hardware, and the energy use of human societies. Understanding how AI extends an evolutionary arc of harnessing energy to process information suggests both limits and opportunities as we design increasingly intelligent systems.
In the second part, Dr. Moses will turn to trustworthiness as a central challenge in AI. As we build AI at unprecedented scale, how do we ensure that it operates reliably and aligns with scientific and societal goals? She will contrast domains where AI has made enormous advances, particularly the rapid improvements in AI-written code, with domains where LLM hallucinations, inaccuracies, biases, and disregard for truth pose significant risks. Finally, she will highlight strategies (technical, organizational, and social) for mitigating these weaknesses so that we can harness the power of large-scale AI systems safely and responsibly.

Yisroel Mirsky is a tenured Assistant Professor and Zuckerman Faculty Scholar in the Department of Software and Information Systems Engineering at Ben-Gurion University (BGU), where he heads the Offensive AI Research Lab. He received his PhD from BGU in 2018. His main research interests include deepfakes, adversarial machine learning, anomaly detection, and intrusion detection. Dr. Mirsky has published his work in some of the best security venues: USENIX, CCS, NDSS, Euro S&P, Black Hat, DEF CON, RSA, CSF, AISec, etc. His research has also been featured in many well-known media outlets: Popular Science, Scientific American, Wired, The Wall Street Journal, Forbes, and the BBC.
Offensive AI: The Dark Side of Intelligent Systems
As artificial intelligence becomes more powerful, so do the techniques used to exploit it. Offensive AI refers to the use—or abuse—of machine learning systems to undermine security, compromise privacy, and erode trust. In this talk, we will explore how adversaries are weaponizing AI across a range of attack surfaces.
Drawing on recent work from our lab presented at USENIX, NDSS, and DEF CON, we’ll cover real-world attacks such as model and data extraction, privacy leakage through side channels, and the misuse of generative models in deepfakes—from real-time impersonation to medical and biometric deception. We’ll also dive into the growing threat of AI-driven cyberattack automation, including voice and chat scams powered by large language models.
This session will offer technical insights into how these attacks work, their implications for the broader security landscape, and what researchers and practitioners can do to stay ahead of evolving threats.

