The recent use of machine learning in high-stakes applications has pushed many industrial actors to rethink how safety-critical systems (such as planes or cars) can be certified before being manufactured and used. Key questions have emerged: How can we properly define the safety of systems with learning components? How can we formally guarantee safety? Which new mathematical guarantees are needed from the ML research community?
This workshop will bring together machine learning researchers with international authorities and industrial experts from sectors where certification and reliability are critical issues. It will consist of invited talks, a poster session, and group discussions. The goal is to present key open industrial questions and traditional methods in critical software verification and certification (along with the challenges AI poses for them), as well as to introduce several promising mathematical theories: distribution-free uncertainty quantification, deep learning theory, formal methods, and rigorous numerics.
This workshop is organized by the DEEL project. We hope it will help shape the future research agenda toward the medium-term objective of certifying critical systems with AI components.
Our workshop has a dedicated poster track. This session offers a unique opportunity to showcase advanced research toward the certification of critical AI-based systems. Encouraged topics include probabilistic risk assessment, formal methods for neural networks, distribution-free uncertainty quantification, generalization guarantees for deep learning, robustness to adversarial attacks, distributional shift, and model misspecification, among others.
You are invited to submit your poster proposal (either a ready-to-go poster or an extended abstract) using the "Submit your poster proposal" button. All accepted posters will be presented virtually during the workshop and posted on the website. Authors are also encouraged to record a 3-minute video that will be made public before the poster session.
Please check the instructions for authors for more details about the expected content and format.
Schedule (to be confirmed). Times are given in UTC+1 (local time in Toulouse).
| Date | Time | Speaker | Title |
|---|---|---|---|
| January 14, 2021 | 14:45–15:00 | Organization team | Opening remarks |
| | 15:00–15:50 | Guillaume Soudain | The Challenges of Certifying AI Solutions in Aviation |
| | 15:50–16:30 | TBD | Generalization Guarantees for Neural Networks |
| | 16:30–16:45 | | Virtual coffee break |
| | 17:30–18:10 | Aaditya Ramdas (TBC) | Distribution-free Uncertainty Quantification for Black-box ML Algorithms |
| | 18:10–19:00 | Group discussions | Which Statistical Guarantees Should We Develop for AI Certification Purposes? |
| January 15, 2021 | 14:00–14:40 | Antoine Miné | Abstract Interpretation for Verification of Critical Embedded Software |
| | 14:40–15:20 | Martin Vechev | Abstract Interpretation for Certification of Neural Networks |
| | 15:20–15:30 | | Virtual coffee break |
| | 15:30–16:20 | Group discussions | Which Formal Methods Should We Develop for AI Certification Purposes? |
| | 16:20–17:00 | Mioara Joldes | Rigorous Numerics: A Short Introduction and Recent Challenges |
| | 17:00–17:40 | Matthieu Cord | Explainability of Vision-based Autonomous Driving Systems: Recent Work and Challenges |
| | 17:40–18:00 | Sébastien Gerchinovitz | Summary of Future Research Directions |
Registration for the workshop is free but mandatory. Please click the button above.