Joris Hulstijn: Computational Accountability

   1 April 2021, 10:00 – 12:00

Centre for Social Sciences ELKH

Hungarian Academy of Sciences Centre of Excellence

Artificial Intelligence National Laboratory (MILAB)

Online Research Seminar Series: AI and Law

 

The aim of the seminar series is to continue the discourse on the legal and regulatory implications of AI and related technologies in the European and global legal, economic and social space. It brings together scholars and practitioners with a distinct multidisciplinary orientation covering the technology, its politics, policy-making and regulation, the law, and the relevant values. The series aims to foster a critical understanding of technological changes and a critical analysis of ongoing and future policy, regulatory and legal developments.

 

The next research seminar covers the following topic:

 

Computational Accountability

 

Speaker: Joris Hulstijn, Tilburg University, Tilburg School of Economics and Management

 

Time: Thursday, 1 April 2021, 10:00 CET

 

Computer systems based on artificial intelligence, and specifically on machine learning techniques, are increasingly pervasive. Based on large amounts of data, such systems make decisions that matter. For example, they select our news, decide whether to grant a loan, or apply a discount based on a customer profile. Still, some person (natural or legal) remains responsible for the decisions made by the system. Looking back, that person is also accountable for the outcomes, and may even be liable in case of damages. That puts constraints on the design of autonomous systems and on the governance models that surround these decisions.

 

Can we design autonomous systems in such a way that all decisions can be justified later, and the person who is ultimately responsible can be held accountable?

 

In this talk, we will analyse the problem of computational accountability along two lines. First, we will discuss system design. For every decision, evidence must be collected about the decision rule that was used and the data that was applied. However, many algorithms are not understandable to humans; hence the need for explainable AI. Alternatively, we must prove that the system is set up in such a way that it can only use valid algorithms and reliable data sets that are appropriate for the decision task. Second, we will discuss the governance model for autonomous systems. What are the standards and procedures, and the roles and responsibilities, needed to make sure that only valid algorithms and reliable data are used in decision making? The discussion will be illustrated with practical examples.
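Purely as an illustration of the evidence-collection idea sketched above, and not as part of the talk itself, the minimal Python sketch below logs, for each automated decision, the decision rule version, a fingerprint of the input data, the outcome, and the responsible party, so that the decision could be justified afterwards. All names, fields, and the loan example are hypothetical assumptions, not the speaker's method.

```python
# Illustrative sketch only: one possible way to keep decision evidence
# for later accountability. All identifiers here are hypothetical.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Evidence kept for each automated decision."""
    timestamp: str          # when the decision was taken
    rule_version: str       # which decision rule / model version was used
    input_hash: str         # fingerprint of the data the decision was based on
    outcome: str            # the decision itself (e.g. "loan_granted")
    responsible_party: str  # person or legal entity accountable for the outcome

def log_decision(inputs: dict, rule_version: str, outcome: str,
                 responsible_party: str, audit_log: list) -> DecisionRecord:
    """Append a record of a decision and its evidence to an audit log."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        rule_version=rule_version,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        outcome=outcome,
        responsible_party=responsible_party,
    )
    audit_log.append(asdict(record))
    return record

# Fictional example: an automated loan decision.
audit_log: list = []
log_decision(
    inputs={"customer_id": 42, "income": 35000, "requested_amount": 10000},
    rule_version="credit-scoring-v1.3",
    outcome="loan_granted",
    responsible_party="Chief Credit Officer",
    audit_log=audit_log,
)
print(json.dumps(audit_log, indent=2))
```

In such a sketch, hashing the inputs rather than storing them verbatim is one design choice among many; in practice the evidence to retain would depend on the governance model and the applicable legal requirements discussed in the talk.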

 

Find more information about this event here: https://file.tk.mta.hu/index.php/s/8P4asP9DTsQDMc4

 

Registration: https://forms.gle/DwJrvChPQxycmhpa9

 

The seminar will be broadcast via Zoom. Participation is subject to prior registration. The Zoom link for the event will be sent to registered participants via email.