KU Leuven, Belgium - luc.deraedt@kuleuven.be
Abstract: A central challenge for contemporary AI is to integrate learning and reasoning. This integration has been studied for decades in the fields of statistical relational artificial intelligence (StarAI) and probabilistic programming. StarAI has focussed on unifying logic and probability, the two key frameworks for reasoning, and has extended the resulting probabilistic logics with machine learning principles. I will argue that StarAI and probabilistic logics form an ideal basis for developing neuro-symbolic artificial intelligence techniques. Thus neuro-symbolic computation = StarAI + Neural Networks. Many parallels will be drawn between these two fields and illustrated using the deep probabilistic logic programming language DeepProbLog.
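To make the notion of a probabilistic logic concrete, here is a minimal sketch using the open-source problog Python package; the two-coin program is an illustrative example of our own choosing, not code from the talk, and DeepProbLog extends such programs with neural predicates.

from problog.program import PrologString
from problog import get_evaluatable

# Two independent biased coins; two_heads holds if both land heads.
model = PrologString("""
0.6::heads(C) :- coin(C).
coin(c1). coin(c2).
two_heads :- heads(c1), heads(c2).
query(two_heads).
""")

# Probabilistic inference computes P(two_heads) = 0.6 * 0.6 = 0.36.
print(get_evaluatable().create_from(model).evaluate())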
Sabanci University, Turkey - esra.erdem@sabanciuniv.edu
Esra Erdem is an associate professor in computer science and engineering at Sabanci University. She received her Ph.D. in computer sciences at the University of Texas at Austin (2002), and carried out postdoctoral research at the University of Toronto and Vienna University of Technology from 2002 to 2006. Her research is in the area of artificial intelligence, in particular, the mathematical foundations of knowledge representation and reasoning, and their applications to various domains, including robotics, bioinformatics, logistics, and economics. She was a general co-chair of ICLP 2013, and is a program co-chair of ICLP 2019 and KR 2020, and the general chair of KR 2021. She served on the editorial board of JAIR, and is a member of the editorial board of TPLP, the ALP executive committee, and the KR steering committee. She received the Young Scientist award by the Science Academy, and the Women Leader of Innovative and Disruptive Technologies award by Microsoft.
Abstract: We have been investigating applications of Answer Set Programming (ASP) in various domains, ranging from historical linguistics and bioinformatics to economics and robotics. In these applications, theory meets practice around challenging computational problems, and they all start a journey towards benefiting science and life. ASP plays an important role in this journey, sometimes as a declarative programming paradigm to solve hard combinatorial search problems (e.g., phylogeny reconstruction for Indo-European languages, multi-agent path finding in autonomous warehouses, matching problems in economics), and sometimes as a knowledge representation paradigm to allow deep reasoning and inferences over incomplete heterogeneous knowledge and beliefs of agents (e.g., hybrid planning for robotic manipulation, diagnostic reasoning in cognitive factories, explanation generation for biomedical queries). In this talk, we will share our experiences from such different applications of ASP, and discuss its role and usefulness from different perspectives.
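For a flavour of ASP as a declarative paradigm for combinatorial search, here is a minimal sketch using the clingo Python API (assuming the clingo package is installed); the graph-colouring instance is a toy example of our own choosing, not an encoding from the talk.

from clingo import Control

ctl = Control(["0"])  # "0": enumerate all answer sets
ctl.add("base", [], """
node(1..3). edge(1,2). edge(2,3). edge(1,3).
color(red;green;blue).
% Each node gets exactly one colour; adjacent nodes differ.
{ assign(N,C) : color(C) } = 1 :- node(N).
:- edge(X,Y), assign(X,C), assign(Y,C).
#show assign/2.
""")
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))  # each printed model is a valid colouring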
ANITI, University of Toulouse, France - jpmarquessilva@gmail.com
Abstract: The forecasted success of machine learning (ML) hinges on systems that are robust in their operation and that can be trusted. This talk overviews recent efforts on applying automated reasoning tools to explaining non-interpretable (black-box) ML models, to assessing heuristic explanations, and to learning interpretable ML models. Concretely, the talk overviews existing approaches for producing rigorous explanations of black-box models and assesses the quality of widely used heuristic approaches for computing explanations. In addition, the talk discusses properties of rigorous explanations. Finally, the talk briefly overviews ongoing work on learning interpretable ML models.
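To illustrate what "rigorous" means here, the sketch below brute-forces a subset-minimal sufficient reason (an abductive explanation) for a toy classifier. The loan-approval function and feature names are invented for illustration, and in practice the completion check is delegated to an automated reasoner (SAT/SMT) rather than enumeration.

from itertools import combinations

# Tiny "black-box": loan approved iff income is high and (employed or guarantor).
def predict(income_high, employed, guarantor):
    return income_high and (employed or guarantor)

instance = {"income_high": 1, "employed": 1, "guarantor": 0}
features = list(instance)

def sufficient(subset):
    # Fixing the features in `subset` to their instance values is sufficient
    # if every completion of the free features preserves the prediction.
    target = predict(**instance)
    free = [f for f in features if f not in subset]
    for bits in range(2 ** len(free)):
        point = dict(instance)
        for i, f in enumerate(free):
            point[f] = (bits >> i) & 1
        if predict(**point) != target:
            return False
    return True

# The smallest sufficient subsets are rigorous explanations of the prediction.
for k in range(len(features) + 1):
    expls = [s for s in combinations(features, k) if sufficient(s)]
    if expls:
        print(expls)  # [('income_high', 'employed')]
        break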
T.J. Watson IBM Research Lab, USA - francesca.rossi2@ibm.com
Abstract: Humans make moral judgements about their own actions and the actions of others. Sometimes they make these judgements by following a utilitarian approach, other times they follow simple deontological rules, and at yet other times they find (or simulate) an agreement among the relevant parties. To build machines that behave similarly to humans, or that can work effectively with humans, we must understand how humans make moral judgements, including when they use a specific moral approach and how they switch appropriately among the various approaches. We investigate how, why, and when humans decide to break some rules. We study a suite of hypothetical scenarios, each describing a person who might break a well-established norm and/or rule, and ask human participants to provide a moral judgement of this action. In order to effectively embed moral reasoning capabilities into a machine, we model the human moral judgements made in these experiments via a generalization of CP-nets, a common preference formalism in computer science. We describe what is needed to model both the scenarios and the moral decisions, which requires an extension of existing computational models. We discuss how this leads to future research directions in the areas of preference reasoning, planning, and value alignment.
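For readers unfamiliar with CP-nets, the sketch below shows their core building block, a conditional preference table, where the preference over an action depends on the scenario's context; the contexts and actions are illustrative placeholders, not the talk's actual model or its generalization.

# Conditional preference table: ordering over actions depends on the context.
cpt_action = {
    "routine":   ["follow_rule", "break_rule"],   # most- to least-preferred
    "emergency": ["break_rule", "follow_rule"],
}

def preferred_action(context):
    # The most-preferred action under the given context.
    return cpt_action[context][0]

def prefers(context, a, b):
    # True if action a is preferred to action b in this context.
    order = cpt_action[context]
    return order.index(a) < order.index(b)

print(preferred_action("emergency"))                    # break_rule
print(prefers("routine", "follow_rule", "break_rule"))  # True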
University of Bath, UK - M.D.Vos@bath.ac.uk
Abstract: Norms, policies and laws all describe the desired behaviour of actors, whether those actors are humans, agents or processes. The actual behaviour of an actor can then be checked against this description, with compliance potentially rewarded and violations penalised. This talk explores the use of answer set programming for the modelling, verification and compliance checking of these norms, policies and laws, both at the modelling stage and in a running system. We demonstrate how this technology can be used to detect inconsistencies between sets of norms/policies/laws and how these inconsistencies could be resolved.
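As a minimal sketch of the idea (again using the clingo Python API, and with an invented borrow/return norm and event trace, not an encoding from the talk), a norm can be expressed as an obligation that a given trace either fulfils or violates:

from clingo import Control

program = """
step(1..3).
happens(borrow(alice,book), 1).
% happens(return(alice,book), 3).   % uncomment to make the trace compliant
% Norm: whoever borrows an item is obliged to return it later.
obliged(return(A,I), T) :- happens(borrow(A,I), T).
fulfilled(O) :- obliged(O, T), happens(O, T2), step(T2), T2 > T.
violated(O)  :- obliged(O, _), not fulfilled(O).
#show violated/1.
"""
ctl = Control(["0"])
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=print)   # violated(return(alice,book))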