Introduction

Robustness is widely understood as the property of a method, algorithm, or system to degrade only gradually in performance as assumptions about its input are increasingly violated. This makes robustness a crucial property for dependable and trustworthy applications of AI in open-world environments, in particular in high-stakes applications in which human well-being is at risk. However, the usual definition of robustness raises several questions, for instance how performance is to be measured and which input characteristics count as abnormal.

Depending on the application area and technique considered, various approaches have been taken to measure or benchmark performance and the abnormality of input characteristics. Sometimes we face unknown requirements on input data, and only later experiments reveal that an approach is not robust (one-pixel attacks on CNN-based object classification being one infamous example).

AI has made considerable progress over the past few years, with many successful examples in perception and reasoning. This has encouraged the integration of the resulting technologies into important and high-stakes real-world applications such as autonomous mobile systems (e.g., self-driving cars, autonomous drones, service robots), automated surgical assistants, electrical grid management, and control of critical infrastructure, to name a few. However, for such an integration to constitute a beneficial socio-technical system, safety and reliability are key, and robustness is essential to avert potentially catastrophic events due to unconsidered phenomena or situations. The aim of this workshop is to bring together researchers from basic and applied AI across all sub-fields of AI to discuss approaches and challenges for developing robust AI. In particular, we envisage a dialogue between the Machine Learning and Symbolic AI communities for the benefit of critical real-world applications. Our aim is to foster exchange between the various AI sub-fields present at KI and to discuss future research directions.

Call For Papers

Robustness refers to the capability of coping with unforeseen phenomena or situations. Gearing AI towards robustness has always been an aim for open-world AI, and it becomes a pressing requirement as AI makes its way into the control of high-stakes applications. Robustness is addressed in many sub-fields of AI using various working definitions and measures. This workshop aims to bring together researchers from all sub-fields of AI working on robust methods.

Topics

In this workshop, we invite the research community in Artificial Intelligence to submit position statements and technical works related to the theme of Robust AI for High-Stakes Applications, in order to develop a joint understanding of robustness in AI and to foster exchange on robust AI. Topics of interest include:

The list above is by no means exhaustive, as the aim is to foster the debate around all aspects of the suggested theme.

Submission

Guidelines

We invite submissions of regular research papers (up to 12 pages in KI format), position papers (up to 6 pages), or abstracts of recently published papers (3 pages) on the topic of robustness. Accepted papers will be published as a collection of working papers. The workshop is also open to people who would like to attend without submitting a paper, as discussion of the topic will play a major role. During the workshop, we will discuss the prospects of proposing a special issue of the KI journal on robust AI. Workshop submissions and camera-ready versions will be handled via EasyChair.

All questions about submissions should be emailed to the contact organizers.

Important Dates

Be mindful of the following dates:

Note: all deadlines are Central European Time (CET), UTC +1, Paris, Brussels, Vienna, Trier.

Workshop Program

Please enter the workshop via the KI 2022 program.
13:30-13:45  Welcome
13:45-14:45  Keynote by Tanya Braun on the role of relational statistical AI for robustness in AI
14:45-15:00  Coffee break and spontaneous interactions
15:00-15:30  Gesina Schwalbe, Christian Wirth and Ute Schmid: Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings (20' + 10')
15:30-16:00  Ines Rieger, Jaspar Pahl, Bettina Finzel and Ute Schmid: Regularization by Integrating Co-Occurrence Domain Knowledge for Affect Recognition (20' + 10')
16:00-16:30  Discussion

Note: all times are Central European Time (CET), UTC +1, Paris, Brussels, Vienna, Trier.

Organization

Organizing Committee

Prof. Dr. Ulrich Furbach is a retired Professor of Artificial Intelligence at the University of Koblenz-Landau and co-founder of wizAI solutions GmbH. He is currently leading the DFG research project Cognitive Reasoning. His research interests are logic, knowledge representation, and cognitive science. Furbach holds a PhD from the University of the Federal Armed Forces Munich and a habilitation from the Technical University of Munich. He is an EurAI and a GI Fellow.

Dr. Alexandra Kirsch works as a freelancer and pursues her own research at the intersection of cognitive AI and user experience design. Her current research interests include decision making, categorization, and digital accessibility. She received her PhD and led an independent research group at TU München. Until 2017 she was an Assistant Professor at the University of Tübingen, and between 2012 and 2018 she was an adjunct member of the Bavarian Academy of Sciences and Humanities.

Dr. Michael Sioutis is a Research Fellow with the Faculty of Information Systems and Applied Computer Sciences at the University of Bamberg, Germany. He received his PhD degree from Artois University, France, in 2017, and was a postdoc at Örebro University, Sweden, from May 2017 to December 2018 and at Aalto University, Finland, during 2019. His general interests lie in artificial intelligence, data mining, and semantic web technologies.

Prof. Diedrich Wolter is Professor of Smart Environments at the University of Bamberg, Germany. He currently leads a BMBF project on dependable AI. His research interests are knowledge representation and reasoning, and autonomous systems interacting with people and the real world. He coordinates the BamBirds team, which seeks to develop an autonomous agent for the AI Birds competition that can robustly play physical simulation games.

Program Committee

Christian Ledig (explainable ML), U Bamberg
Jae Hee Lee (knowledge technology), U Hamburg
Ute Schmid (cognitive systems), U Bamberg
Zhiguo Long (qualitative representation and reasoning), Southwest Jiaotong University, China
Jussi Rintanen (artificial intelligence and software systems), Aalto University, Finland
Franz Wotawa (software engineering), TU Graz, Austria
Sebastian Wrede (cognition and robotics), CoR-Lab, U Bielefeld
Kristina Yordanova (cognitive methods for situation-aware assistive systems), U Rostock

Venue

The workshop will take place at the main campus (Campus I), Universitätsring 15, 54296 Trier, Germany.
