DATAIA DAYS ON SAFETY & AI

First DATAIA Days on Safety & AI, September 11th, 2019

SAFETY IN AI SHOULD NOT BE AN OPTION, BUT A DESIGN PRINCIPLE

Organizers: Pablo Piantanida (CentraleSupélec), François Terrier (CEA List)

Co-organizer: Labex Digicosme

This DATAIA Day will explore new ideas at the intersection of Artificial Intelligence (AI) and safety engineering, with the goal of developing rigorous techniques for building safe and trustworthy autonomous AI systems and establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society. Safety in AI should not be an option, but a design principle. However, there are different levels of safety and different degrees of liability, which involve trade-offs or alternative solutions. These choices can only be analyzed by considering both the theoretical and practical challenges of the engineering problem of AI safety. This view must cover a wide range of AI paradigms, from systems designed for a specific application to more general systems, whose potential risks may be harder to anticipate. We must also bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate, and maintain AI-based systems that are truly safe.

In particular, SAFETY-DATAIA will provide a forum for thematic presentations and in-depth discussions on safe AI architectures, ML safety, safe human-machine interaction, and safety considerations in automated decision-making systems, all in a way that makes AI-based systems more trustworthy. SAFETY-DATAIA aims to bring together experts, researchers, and practitioners from diverse communities, such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems, and safety-critical systems, as well as application domains such as automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures, and retail.

Tentative Program

9.00 am Welcome Coffee

KEYNOTE
Rob Ashmore (Fellow, UK Defence Science and Technology Laboratory (Dstl))
Safety Assurance of Autonomous Systems: Progress and Challenges

SCIENTIFIC PRESENTATIONS
Machine Learning and privacy: friends or enemies?
Catuscia Palamidessi, Inria/ LIX

Randomization techniques for robustness to adversarial attacks
Jamal Atif (Paris-Dauphine), Cedric Gouy-Pailler (CEA List) & Rafaël Pinot (Paris-Dauphine PSL & CEA)

Formal validation for machine learning
Zakaria Chihani, Julien Girard (CEA List) & Guillaume Charpiat, Marc Schoenauer (Inria/LRI)
 
Explainability for machine learning
Frédéric Pascal, Pablo Piantanida (CentraleSupélec)

Safety evaluation process for AI-based autonomous systems
Morayo Adedjouma & Gabriel Pedroza (CEA List)

ROUNDTABLE "SAFETY OF AI: INDUSTRIAL CHALLENGES"
Moderator
Jean-Noël Patillon, Director of CEA List, AI Program Coordinator at CEA Tech

Guests
Julien Chiaroni, Director, Grand Défi "Sécurisation, certification et fiabilisation de l'intelligence artificielle"
Javier Ibanez-Guzman, Autonomous Vehicle Expert (Renault)
Michaël Krajecki, Director of AI program (Agence de l’Innovation pour la Défense)
David Sadek, Vice President Research, Technology and Innovation (Thales)

6.00 pm Closing Cocktail

POSTER SESSION

Poster session on ongoing PhD and internship work

Call for poster proposals - Please send your proposals to the organizers: pablo.piantanida@centralesupelec.fr and francois.terrier@cea.fr.

Possible topics for posters (non-exhaustive):

* Artificial intelligence applied to safety and security

* Safety constraints and rules in decision-making systems

* Uncertainty in AI and its effect on safety

* Safety in AI-based system architectures 

* V&V of AI components and AI-based systems

* Continuous V&V and predictability of AI safety properties

* Runtime monitoring and (self-)adaptation of AI safety

* Accountability, responsibility and liability of AI-based systems

* Effect of uncertainty in AI safety

* Avoiding negative side effects in AI-based systems

* Role and effectiveness of oversight: corrigibility and interruptibility

* Loss of values and the catastrophic forgetting problem

* Confidence, self-esteem and the distributional shift problem

* Safety of Artificial General Intelligence (AGI) systems and the role of generality

* Reward hacking and training corruption

* Self-explanation, self-criticism and the transparency problem

* Human-machine interaction safety

* Regulating AI-based systems: safety standards and certification

* Evaluation platforms for AI safety 

* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others

Registration is free but mandatory, within the limit of available seats.

Register here