
DATAIA DAYS ON SAFETY & AI


First DATAIA Day on SAFETY & AI: September 11th, 2019

SAFETY IN AI SHOULD NOT BE AN OPTION, BUT A DESIGN PRINCIPLE

Organizers: Pablo Piantanida (CentraleSupélec), François Terrier (CEA List)

This DATAIA Day will explore new ideas at the intersection of Artificial Intelligence (AI) and safety engineering, with the goal of developing rigorous techniques for building safe and trustworthy autonomous AI systems and establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society. Safety in AI should not be an option, but a design principle. There are, however, different levels of safety and different degrees of liability, for which we face trade-offs or alternative solutions. These choices can only be analyzed by considering both the theoretical and practical challenges of the engineering problem of AI safety. This view must cover a wide range of AI paradigms, from systems designed for a specific application to more general systems, which can lead to unanticipated risks. We must also bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are truly safe.

In particular, SAFETY-DATAIA will provide a forum for thematic presentations and in-depth discussions about safe AI architectures, ML safety, safe human-machine interaction, and safety considerations in automated decision-making systems, with the aim of making AI-based systems more trustworthy. SAFETY-DATAIA aims to bring together experts, researchers, and practitioners from diverse communities such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems, and safety-critical systems, as well as application domains such as automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures, and retail.

The detailed program will be available soon.

Poster session for ongoing PhD works

Call for poster proposals: please send your proposals to the organizers, pablo.piantanida@centralesupelec.fr and francois.terrier@cea.fr.

Possible topics for PhD posters include (but are not limited to):

* Artificial intelligence used in the field of safety and security

* Safety constraints and rules in decision-making systems

* Uncertainty in AI and its effect on safety

* Safety in AI-based system architectures 

* V&V of AI components and AI-based systems

* Continuous V&V and predictability of AI safety properties

* Runtime monitoring and (self-)adaptation of AI safety

* Accountability, responsibility and liability of AI-based systems

* Effect of uncertainty in AI safety

* Avoiding negative side effects in AI-based systems

* Role and effectiveness of oversight: corrigibility and interruptibility

* Loss of values and the catastrophic forgetting problem

* Confidence, self-esteem and the distributional shift problem

* Safety of Artificial General Intelligence (AGI) systems and the role of generality

* Reward hacking and training corruption

* Self-explanation, self-criticism and the transparency problem

* Human-machine interaction safety

* Regulating AI-based systems: safety standards and certification

* Evaluation platforms for AI safety 

* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others

Registration is free but compulsory; places are limited.

Register Here