AISB/AIxIA Spotlight Seminar on AI – Alessio Lomuscio

Title: Towards Verification of Neural Systems

Speaker: Alessio Lomuscio, PhD.
Imperial College London.
Safe Intelligence.

21 March, 5pm CET (4pm GMT)

Abstract:
A major challenge in deploying ML-based systems, such as ML-based computer vision, is the inherent difficulty of ensuring their performance across the operational design domain. The standard approach consists of testing models extensively against a wide collection of inputs. However, testing is inherently limited in coverage, and it is expensive in several domains.

Novel verification methods provide guarantees that a neural model meets its specifications in dense neighbourhoods of selected inputs. For example, by using verification methods we can establish whether a model is robust with respect to infinitely many re-illumination changes, or to particular noise patterns, in the vicinity of an input. Verification methods can also be tailored to specifications in the latent space, establishing the robustness of models against semantic perturbations not definable in the input space (3D pose changes, background changes, etc.). Additionally, verification methods can be paired with learning to obtain robust learning methods capable of generating models inherently more robust than those derived with standard methods.
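To make the flavour of these guarantees concrete, here is a minimal sketch of one of the simpler certification techniques in this space, interval bound propagation. It is an illustration only, not the speaker's or Safe Intelligence's tooling; the tiny two-layer ReLU network, the function names, and the epsilon value are all assumptions for exposition. The check asks whether the predicted class provably cannot change for any perturbation within an L-infinity ball of radius eps around an input x:

import numpy as np

def ibp_bounds(W1, b1, W2, b2, x, eps):
    # Propagate the box [x - eps, x + eps] through linear -> ReLU -> linear,
    # splitting each weight matrix by sign to keep the bounds sound.
    lo, hi = x - eps, x + eps
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    lo, hi = Wp @ lo + Wn @ hi + b1, Wp @ hi + Wn @ lo + b1
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    Wp, Wn = np.maximum(W2, 0), np.minimum(W2, 0)
    return Wp @ lo + Wn @ hi + b2, Wp @ hi + Wn @ lo + b2

def certified_robust(W1, b1, W2, b2, x, eps):
    # True: the prediction is provably stable for ALL perturbations with
    # ||delta||_inf <= eps. False: inconclusive (not necessarily unsafe).
    logits = W2 @ np.maximum(W1 @ x + b1, 0) + b2
    pred = int(np.argmax(logits))
    lo, hi = ibp_bounds(W1, b1, W2, b2, x, eps)
    return bool(lo[pred] > np.delete(hi, pred).max())

A True answer certifies robustness against the infinitely many perturbations in the ball at once, which is exactly what testing cannot do; a False answer only means these cheap bounds were inconclusive, which is where tighter verification methods of the kind discussed in the talk come in.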

In this presentation I will succinctly cover the key theoretical results underpinning present ML verification technology, illustrate the resulting toolsets and their capabilities, and describe some of the use cases developed with our colleagues at Boeing Research, including centerline distance estimation, object detection, and runway detection.

I will argue that verification and robust learning can be used to obtain models that are inherently more robust than those produced by present learning and testing approaches, thereby unlocking the deployment of ML in safety-critical applications in society.

Bio:
Alessio Lomuscio is Professor of Safe Artificial Intelligence at Imperial College London (UK), where he leads the Safe AI Lab. He is an ACM Distinguished Member, a Fellow of the European Association for Artificial Intelligence (EurAI), and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. He is founding co-director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence.

Alessio’s research interests concern the development of verification methods for artificial intelligence. Since 2000 he has pioneered formal methods for the verification of autonomous systems and multi-agent systems, both symbolic and ML-based. He has published over 200 papers in leading AI and formal methods conferences and journals.

He is the founder and CEO of Safe Intelligence, a VC-backed Imperial College London spinout helping users build and assure robust ML systems.
