Invited Speakers

Luc De Raedt

KU Leuven, Belgium - luc.deraedt@kuleuven.be


Prof. Luc De Raedt is a full professor at the Department of Computer Science, KU Leuven, and director of Leuven.AI, the newly founded KU Leuven Institute for AI. He is a guest professor at Örebro University in the Wallenberg AI, Autonomous Systems and Software Program. He received his PhD in Computer Science from KU Leuven (1991) and was full professor (C4) and Chair of Machine Learning at the Albert-Ludwigs-University Freiburg, Germany (1999-2006). His research interests are in Artificial Intelligence, Machine Learning and Data Mining, as well as their applications. He is well known for his contributions in the areas of learning and reasoning, in particular for his work on probabilistic and inductive programming. He co-chaired important conferences such as ECMLPKDD 2001 and ICML 2005 (the European and international conferences on machine learning) and ECAI 2012, and will chair IJCAI 2022 (the European and international AI conferences). He is on the editorial boards of Artificial Intelligence, Machine Learning and the Journal of Machine Learning Research. He is a EurAI and AAAI fellow, and received an ERC Advanced Grant in 2015.

Talk Title: "From Probabilistic Logics to Neuro-Symbolic Artificial Intelligence"

Abstract: A central challenge to contemporary AI is to integrate learning and reasoning. This integration has been studied for decades in the fields of statistical relational artificial intelligence (StarAI) and probabilistic programming. StarAI has focussed on unifying logic and probability, the two key frameworks for reasoning, and has extended these probabilistic logics with machine learning principles. I will argue that StarAI and probabilistic logics form an ideal basis for developing neuro-symbolic artificial intelligence techniques; thus, neuro-symbolic computation = StarAI + neural networks. Many parallels will be drawn between the two fields and illustrated using the deep probabilistic logic programming language DeepProbLog.
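To give a flavour of the probabilistic logic programs the talk builds on, here is a minimal sketch using the ProbLog 2 Python package (a hedged illustration, not from the talk: it assumes the pip-installable problog package and its PrologString/get_evaluatable API; DeepProbLog additionally lets neural networks supply the probabilistic facts).

# A minimal probabilistic logic program, evaluated with the ProbLog 2
# Python package (pip install problog). Illustrative only.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.1::burglary.
0.2::earthquake.
% The alarm may go off if there is a burglary or an earthquake.
0.9::alarm :- burglary.
0.8::alarm :- earthquake.
query(alarm).
""")

# Compile the program to a circuit and compute the query probability:
# P(alarm) = 1 - (1 - 0.1*0.9) * (1 - 0.2*0.8), roughly 0.236.
result = get_evaluatable().create_from(model).evaluate()
for query, probability in result.items():
    print(query, probability)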

Esra Erdem

Sabanci University, Turkey - esra.erdem@sabanciuniv.edu

Esra Erdem is an associate professor in computer science and engineering at Sabanci University. She received her Ph.D. in computer sciences at the University of Texas at Austin (2002), and carried out postdoctoral research at the University of Toronto and Vienna University of Technology from 2002 to 2006. Her research is in the area of artificial intelligence, in particular the mathematical foundations of knowledge representation and reasoning, and their applications to various domains, including robotics, bioinformatics, logistics, and economics. She was a general co-chair of ICLP 2013, is a program co-chair of ICLP 2019 and KR 2020, and is the general chair of KR 2021. She served on the editorial board of JAIR, and is a member of the editorial board of TPLP, the ALP executive committee, and the KR steering committee. She received the Young Scientist award from the Science Academy, and the Women Leader of Innovative and Disruptive Technologies award from Microsoft.

Talk Title: "Applications of Answer Set Programming where Theory meets Practice"

Abstract: We have been investigating applications of Answer Set Programming (ASP) in various domains, ranging from historical linguistics and bioinformatics to economics and robotics. In these applications, theory meets practice around challenging computational problems, and each application starts a journey towards benefiting science and life. ASP plays an important role in this journey, sometimes as a declarative programming paradigm to solve hard combinatorial search problems (e.g., phylogeny reconstruction for Indo-European languages, multi-agent path finding in autonomous warehouses, matching problems in economics), and sometimes as a knowledge representation paradigm to allow deep reasoning and inferences over incomplete heterogeneous knowledge and beliefs of agents (e.g., hybrid planning for robotic manipulation, diagnostic reasoning in cognitive factories, explanation generation for biomedical queries). In this talk, we will share our experiences from such different applications of ASP, and discuss its role and usefulness from different perspectives.
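As a concrete taste of ASP as a declarative paradigm for hard combinatorial search, below is a minimal sketch (a toy example, not from the talk) that 3-colours a small graph via the clingo Python API; the problem stands in for applications such as the matching and path-finding problems mentioned above.

# A small ASP program solved with the clingo Python API (pip install clingo).
import clingo

program = """
node(1..4).
edge(1,2). edge(2,3). edge(3,4). edge(4,1).
colour(red; green; blue).

% Each node gets exactly one colour.
1 { assign(N,C) : colour(C) } 1 :- node(N).

% Adjacent nodes must get different colours.
:- edge(N,M), assign(N,C), assign(M,C).

#show assign/2.
"""

ctl = clingo.Control(["1"])          # ask for one answer set
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))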

Joao Marques-Silva

ANITI, University of Toulouse, France - jpmarquessilva@gmail.com


Joao Marques-Silva is a Research Chair of the Artificial and Natural Intelligence Toulouse Institute (ANITI). Before joining ANITI, he was affiliated with the University of Lisbon in Portugal, University College Dublin in Ireland, and University of Southampton in the United Kingdom. Dr. Marques-Silva is a Fellow of the IEEE and was a recipient of the 2009 CAV Award for fundamental contributions to the development of high-performance Boolean satisfiability solvers.

Talk Title: "Formal Reasoning Methods for Explainability in Machine Learning"

Abstract: The forecasted success of machine learning (ML) hinges on systems that are robust in their operation and that can be trusted. This talk overviews recent efforts to apply automated reasoning tools to explaining non-interpretable (black-box) ML models, to assessing heuristic explanations, and to learning interpretable ML models. Concretely, the talk overviews existing approaches for producing rigorous explanations of black-box models and assesses the quality of widely used heuristic approaches for computing explanations. In addition, the talk discusses properties of rigorous explanations. Finally, it briefly overviews ongoing work on learning interpretable ML models.
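To illustrate the style of reasoning involved, the following is a simplified sketch (not the talk's actual algorithms) of the standard deletion-based computation of an abductive explanation: starting from the full instance, each feature literal is dropped if the prediction still logically follows, which is decided by a SAT call. It assumes the PySAT library (pip install python-sat).

# Deletion-based abductive explanation, simplified illustration.
from pysat.solvers import Glucose3

# Toy "classifier": f(x1,x2,x3) = x1 AND (x2 OR x3), variables 1..3.
# We load the CNF of NOT f, so "these literals entail the prediction"
# becomes "NOT f plus these literals is unsatisfiable".
solver = Glucose3()
solver.add_clause([-1, -2])   # clause of NOT f
solver.add_clause([-1, -3])   # clause of NOT f

instance = [1, 2, -3]         # x1=1, x2=1, x3=0; predicted f=1
explanation = list(instance)

# Drop each literal in turn; keep it out if the prediction still follows.
for lit in instance:
    candidate = [l for l in explanation if l != lit]
    if not solver.solve(assumptions=candidate):   # UNSAT: still entails f
        explanation = candidate

print("Abductive explanation:", explanation)      # [1, 2]: x1=1 and x2=1 suffice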

Francesca Rossi

T.J. Watson IBM Research Lab, USA - francesca.rossi2@ibm.com


Francesca Rossi is an IBM Fellow, the IBM AI Ethics Global Leader, and a Distinguished Research Staff Member at the T.J. Watson IBM Research Lab. She was a professor of computer science at the University of Padova for 20 years before joining IBM. Her research interests focus on artificial intelligence and include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular for decision support systems for group decision making. On these topics, she has published over 200 scientific articles in journals, conference proceedings, and book chapters. She is a fellow of both the worldwide association of AI (AAAI) and the European one (EurAI). She has been president of IJCAI (the International Joint Conference on AI), an executive councillor of AAAI, and Editor in Chief of the Journal of AI Research. She is a member of the scientific advisory board of the Future of Life Institute (Cambridge, USA) and a deputy director of the Leverhulme Centre for the Future of Intelligence (Cambridge, UK). She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners. She is a member of the European Commission High-Level Expert Group on AI and the general chair of the AAAI 2020 conference. She co-leads the internal IBM AI Ethics board.

Talk Title: "When Is It Morally Acceptable to Break the Rules? A Preference-Based Approach" - This is the "EurAI talk"

Abstract: Humans make moral judgements about their own actions and the actions of others. Sometimes they make these judgements by following a utilitarian approach, other times they follow simple deontological rules, and yet other times they find (or simulate) an agreement among the relevant parties. To build machines that behave similarly to humans, or that can work effectively with humans, we must understand how humans make moral judgements, including when they use a specific moral approach and how they switch among the various approaches. We investigate how, why, and when humans decide to break some rules. We study a suite of hypothetical scenarios, each describing a person who might break a well-established norm or rule, and ask human participants to provide a moral judgement of this action. In order to effectively embed moral reasoning capabilities into a machine, we model the human moral judgements made in these experiments via a generalization of CP-nets, a common preference formalism in computer science. We describe what is needed to model both the scenarios and the moral decisions, which requires an extension of existing computational models. We discuss how this leads to future research directions in the areas of preference reasoning, planning, and value alignment.
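For readers unfamiliar with CP-nets, the following toy sketch (plain Python, illustrative only; the talk uses a generalization of CP-nets that this example does not capture) shows the ceteris-paribus reading: a preference over one variable's values may depend on the values of its parent variables.

# A minimal CP-net sketch: wine preference depends on the main course.
# Conditional preference tables map a parent context to values, best first.
cpt = {
    "main": {(): ["fish", "meat"]},                  # fish preferred outright
    "wine": {("fish",): ["white", "red"],            # with fish: white > red
             ("meat",): ["red", "white"]},           # with meat: red > white
}
parents = {"main": (), "wine": ("main",)}

def improving_flip(outcome, var, new_value):
    """True if changing `var` to `new_value` improves the outcome,
    holding everything else fixed (the ceteris-paribus reading)."""
    context = tuple(outcome[p] for p in parents[var])
    order = cpt[var][context]
    return order.index(new_value) < order.index(outcome[var])

outcome = {"main": "fish", "wine": "red"}
print(improving_flip(outcome, "wine", "white"))   # True: (fish,white) > (fish,red)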

Marina De Vos

University of Bath, UK - M.D.Vos@bath.ac.uk


Marina De Vos is a Senior Lecturer (Associate Professor) at the University of Bath, UK. After completing her PhD in Computer Science at the Vrije Universiteit Brussel, Belgium, she joined the Department of Computer Science at the University of Bath, where she recently became the Director of Training for the newly funded Centre for Doctoral Training in Accountable, Responsible and Transparent Artificial Intelligence. Her research area is knowledge representation and reasoning, using answer set programming to model human/agent decision-making. Currently, her work focuses on the modelling, explanation and verification of normative and policy-based reasoning in the areas of legal and socio-technical systems. In these systems, the behaviour of participants, both human and computational agents, is guided by a set of norms/policies that describe expected behaviour; non-compliance can be monitored and penalised, while compliance is rewarded. Through a formal model, and a corresponding implementation, the behaviour of the entire system can be verified and explained. Beyond normative modelling, she is interested in software development for AI systems in general, and logic-based systems more specifically, and their use in wider society.

Talk Title: "Norms, Policy and Laws: Modelling, Compliance and Violation" - This is the "Woman in LP Talk"

Abstract: Norms, policies and laws all describe desired behaviour of actors, whether they are humans, agents or processes. The actual behaviour of an actor can then be checked against these descriptions, with compliance rewarded and violation penalised. This talk explores the use of answer set programming in the modelling, verification and compliance checking of norms, policies and laws, both at the modelling stage and in a running system. We demonstrate how this technology can be used to detect inconsistencies between sets of norms/policies/laws and how such inconsistencies could be resolved.
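As a minimal illustration of the idea (a toy sketch, not the richer institutional frameworks used in this line of work), the following ASP program, run through the clingo Python API, derives a violation when an obligation triggered by observed behaviour is never discharged.

# Sketch of ASP-based norm monitoring with the clingo Python API.
import clingo

program = """
% Norm: an agent that borrows a book is obliged to return it.
obliged(return(A,B)) :- borrowed(A,B).

% Observed behaviour.
borrowed(alice, book1).
borrowed(bob,   book2).
returned(alice, book1).

% A violation arises when an obligation is not discharged.
violation(A,B) :- obliged(return(A,B)), not returned(A,B).

#show violation/2.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))   # prints violation(bob,book2)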