Invited Speakers

The chairs of ISIPTA ’17 and ECSQARU 2017 are pleased to announce the following invited talks. For each talk, the abstract, a link to the slides, and a short biography of the speaker are given below.

Evaluation Methods of Arguments: Current Trends and Challenges

An Invited Talk by Leila Amgoud (slides)

Session chair: Odile Papini.

Argumentation is a reasoning process based on the justification of conclusions by arguments. Due to its explanatory power, it has become a hot topic in Artificial Intelligence. It is used for making decisions under uncertainty, learning rules, modeling different types of dialogue, and, more importantly, for reasoning about inconsistent information. Hence, an argument’s conclusion may be of different kinds: a statement that is true or false, an action to perform, a goal to pursue, etc. Furthermore, an argument generally has an intrinsic strength, which may reflect different aspects (the certainty degree of its reason, the importance of the value it promotes if any, the reliability of its source, …). Whatever its intrinsic strength (strong or weak), an argument may be weakened by other arguments (called attackers) and strengthened by others (called supporters). The overall acceptability of arguments then needs to be evaluated. Several evaluation methods, called semantics, have been proposed for that purpose. In this talk, we show that they can be partitioned into three classes (extension semantics, gradual semantics, ranking semantics), which respectively answer the following questions:

  1. What are the coalitions of arguments?
  2. What is the overall strength of an argument?
  3. How can arguments be rank-ordered from the most to the least acceptable?

We analyze the three classes against a set of rationality principles, and show that extension semantics are fundamentally different from the other two classes. This means that, in concrete applications, they lead to different results. In particular, in the case of reasoning with inconsistent information, extension semantics follow the same line of research as well-known syntactic approaches for handling inconsistency, while the other two classes lead to novel and powerful ranking logics. We argue that there is no universal evaluation method: the choice of a suitable method depends on the application at hand. Finally, we point out some challenges ahead.
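
To give a flavour of what a gradual semantics computes, here is a minimal sketch (an illustration of my own, not material from the talk; the function name and example graph are mine) of one well-known instance, sometimes called the h-categorizer, in which an argument's score is 1 divided by 1 plus the sum of its attackers' scores, obtained by fixed-point iteration.

```python
def h_categorizer(arguments, attacks, iterations=100):
    """Gradual semantics sketch: score(a) = 1 / (1 + sum of attackers' scores).

    arguments: iterable of argument names.
    attacks:   set of (attacker, target) pairs.
    """
    score = {a: 1.0 for a in arguments}
    for _ in range(iterations):  # fixed-point iteration
        score = {
            a: 1.0 / (1.0 + sum(score[b] for (b, t) in attacks if t == a))
            for a in arguments
        }
    return score

# Example: a attacks b, b attacks c. Argument a is unattacked and keeps score 1,
# b is weakened by a, and c is partially rehabilitated because its only attacker is weak.
print(h_categorizer({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```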

Leila Amgoud is a senior CNRS researcher at the IRIT lab in Toulouse, France. She earned her PhD in computer science from the University of Toulouse in 1999. Her research interests include argumentation-based reasoning, nonmonotonic reasoning, inconsistency management, and the modeling of interactions between autonomous agents (negotiation, persuasion). She regularly serves as a program committee member of multiple Artificial Intelligence conferences and is on the editorial board of the Argument and Computation journal. She has published more than 120 peer-reviewed papers and has been an ECCAI fellow since 2014.

Bayes + Hilbert = Quantum Mechanics

An Invited Talk by Alessio Benavoli (slides)

Session chair: Alessandro Antonucci.

Quantum mechanics (QM) is based on four main axioms, which were derived after a long process of trial and error. The motivations for the axioms are not always clear, and even to experts the basic axioms of QM often appear counter-intuitive. In a recent paper, we have shown that: (i) it is possible to derive quantum mechanics from a single principle of self-consistency or, in other words, that the laws of QM are logically consistent; (ii) QM is just the Bayesian theory generalised to the complex Hilbert space. In particular, we have considered the problem of gambling on a quantum experiment and enforced rational behaviour by a few rules. These rules yield, in the classical case, the Bayesian theory of probability via duality theorems. In our quantum setting, they yield the Bayesian theory generalised to the space of Hermitian matrices. This very theory is QM: in fact, we have derived all four of its postulates from the generalised Bayesian theory. This implies that QM is self-consistent. It also leads us to reinterpret the main operations in quantum mechanics as probability rules: Bayes’ rule (measurement), marginalisation (partial tracing), independence (tensor product). To say it with a slogan, we have obtained that quantum mechanics is the Bayesian theory in the complex numbers.
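
As a toy numerical illustration of this dictionary (my own sketch with assumed values, not taken from the paper), the following numpy snippet shows the partial trace acting as marginalisation and the projective (Lüders) measurement update acting as a Bayes-like conditioning on a two-qubit state.

```python
import numpy as np

# Density matrix of a Bell state (|00> + |11>)/sqrt(2): a "joint belief" over two qubits.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Marginalisation = partial trace over the second qubit.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)  # maximally mixed state: diag(1/2, 1/2)

# "Bayes' rule" = Lüders update after measuring qubit A in the computational basis
# and observing outcome 0: project, then renormalise by the outcome probability.
P0 = np.kron(np.diag([1, 0]), np.eye(2))   # projector onto outcome 0 on qubit A
prob0 = np.trace(P0 @ rho).real            # Born rule: probability of that outcome
rho_post = (P0 @ rho @ P0) / prob0         # conditional (posterior) state
print(prob0, np.round(rho_post, 3))
```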

Alessio Benavoli received his degrees in Computer and Control Engineering from the University of Firenze, Italy: the M.S. degree in 2004 and the Ph.D. in 2008. From April 2007 to May 2008, he worked for the international company SELEX-Sistemi Integrati as a system analyst. He is currently a senior researcher at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Lugano, Switzerland. His research interests are in the areas of imprecise probabilities, the logic of science, Bayesian nonparametrics, decision-making under uncertainty, and control theory. He has co-authored about 70 peer-reviewed publications in top conferences and journals, including Physical Review A, Automatica, IEEE Transactions on Automatic Control, the Journal of Machine Learning Research, Machine Learning and Statistics.

Encounters with Imprecise Probabilities

An Invited Talk by Jim Berger (slides)

Session chair: Giorgio Corani.

Although I have not formally done research in imprecise probability over the last twenty years, imprecise probability has been central to much of my research in other areas. This talk will review some of these encounters with imprecise probability, taking examples from four areas:

  1. Using probabilities of a “higher type” (I.J. Good’s phrase), with an application to genome-wide association studies.
  2. Robust Bayesian bounds, with an application to the conversion of p-values to odds (see the sketch after this list).
  3. Importance (and non-importance) of dependencies in imprecise probabilities.
  4. Imprecise probabilities arising from model bias, with examples from both statistical and physical modeling.
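
As a concrete illustration of item 2 above (a sketch of my own, not material from the talk; the function name is mine), the following snippet computes the well-known -e p log(p) calibration associated with Berger and co-authors, which lower-bounds the Bayes factor in favour of a point null hypothesis over a broad class of priors and thereby upper-bounds the odds against the null implied by a p-value.

```python
import math

def bayes_factor_bound(p):
    """Lower bound -e * p * log(p) on the Bayes factor B(H0 : H1), valid for 0 < p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    b = bayes_factor_bound(p)
    print(f"p = {p}: B(H0:H1) >= {b:.3f}, odds against H0 at most {1 / b:.1f} : 1")
```

For example, p = 0.05 yields odds against the null of at most about 2.5 to 1, far weaker evidence than the p-value is often taken to suggest.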

Jim Berger is the Arts and Sciences Professor of Statistics at Duke University. His current research interests include Bayesian model uncertainty and uncertainty quantification for complex computer models. Berger was president of the Institute of Mathematical Statistics in 1995-1996 and of the International Society for Bayesian Analysis in 2004. He was the founding director of the Statistical and Applied Mathematical Sciences Institute, serving from 2002 to 2010. He was co-editor of the Annals of Statistics from 1998 to 2000 and was a founding editor of the Journal on Uncertainty Quantification from 2012 to 2015. Berger is a Fellow of the ASA and the IMS and has received Guggenheim and Sloan Fellowships. He received the Committee of Presidents of Statistical Societies ‘President’s Award’ in 1985, was the COPSS Fisher Lecturer in 2001 and the Wald Lecturer of the IMS in 2007. He was elected a foreign member of the Spanish Real Academia de Ciencias in 2002, elected to the USA National Academy of Sciences in 2003, was awarded an honorary Doctor of Science degree from Purdue University in 2004, and became an Honorary Professor at East China Normal University in 2011.

Symbolic and Quantitative Representations of Uncertainty: an Overview

An Invited Talk by Didier Dubois (slides)

Session chair: Ines Couso.

The distinction between aleatory and epistemic uncertainty is increasingly acknowledged these days, and the idea that they should not be handled in the same way is more and more accepted. Aleatory uncertainty refers to a summarized description of natural phenomena by means of frequencies of occurrence, which justifies a numerical approach based on probability theory. In contrast, epistemic uncertainty stems from a lack of information and describes the state of knowledge of an agent. It seems to be basically qualitative, and is captured by sets of possible worlds or states of nature, one of which is the actual one. In other words, beliefs induced by aleatory uncertainty are naturally quantitative, while this is less obvious for beliefs stemming from epistemic uncertainty, for which there are various approaches, ranging from qualitative ones like three-valued logics and modal logics to quantitative ones like subjective probabilities. The qualitative approaches can be refined by considering degrees of belief on finite value scales or by means of confidence relations. Moreover, aleatory and epistemic uncertainty may come together, which leads to the use of upper and lower probabilities.

In this talk, we review the various approaches to the representation of uncertainty, showing similarities between quantitative and qualitative approaches. We give a general definition of an epistemic state, or information item, as something that defines a set of possible values, a set of plausible ones, and a plausibility ordering on events. Moreover, epistemic states must be compared in terms of informativeness.

The basic mathematical tool for representing uncertainty is the monotonic set function, called a capacity or fuzzy measure. In the quantitative case, the most general model is based on convex probability sets, that is, capacities that stand for lower probabilities. In the qualitative case, the simplest non-Boolean approach is based on possibility and necessity measures. It is shown that possibility theory plays in the qualitative setting a role similar to that of probability theory in the quantitative setting. Just as a numerical capacity can, under some conditions, encode a family of probability distributions, a qualitative capacity always encodes a family of possibility distributions. For decision purposes, the Sugeno integral plays a role similar to that of the Choquet integral.
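
As a small illustration of the qualitative side (a sketch with assumed toy values, not taken from the talk; the distribution and function names are mine), the following snippet computes the possibility and necessity measures induced by a possibility distribution over a finite frame.

```python
# Hypothetical possibility distribution over a small frame of discernment.
pi = {"sunny": 1.0, "cloudy": 0.7, "rain": 0.3}

def possibility(event):
    """Pi(A) = max of pi over the worlds in A (0 for the empty event)."""
    return max((pi[w] for w in event), default=0.0)

def necessity(event):
    """N(A) = 1 - Pi(complement of A): how much the information entails A."""
    complement = [w for w in pi if w not in event]
    return 1.0 - possibility(complement)

A = {"sunny", "cloudy"}                 # the event "no rain"
print(possibility(A), necessity(A))     # Pi(A) = 1.0, N(A) = 0.7
```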

Logical reasoning under incomplete information can be achieved by means of a simplified version of epistemic logic whose semantics is stated in terms of possibility theory, in contrast with probabilistic reasoning. It can be extended to reasoning with degrees of belief using generalised possibilistic logic. Various ways of defining logics of uncertainty are outlined: absolute, comparative, or fuzzy.

Finally, we discuss the issue of uncertainty due to conflicting items of information. In the numerical setting this is naturally captured by the theory of evidence, which essentially models unreliable testimonies and their fusion. A general approach to the fusion of information items is outlined, proposing merging axioms that apply to both quantitative and qualitative items of information. We also show that, using Boolean-valued capacities, we can faithfully represent conflicting information coming from several sources. In this setting, necessity functions represent incomplete information, while possibility measures represent precise but conflicting pieces of information.
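
For readers unfamiliar with the theory of evidence, here is a minimal sketch of the classical Dempster rule of combination for merging two unreliable testimonies (an illustration only; the talk outlines a more general fusion framework with its own merging axioms, and the example masses are mine).

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions given as dicts
    mapping frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc          # mass assigned to contradictory pairs
    # Renormalise away the conflicting mass.
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two partially conflicting testimonies about a suspect.
m1 = {frozenset({"alice"}): 0.8, frozenset({"alice", "bob"}): 0.2}
m2 = {frozenset({"bob"}): 0.6, frozenset({"alice", "bob"}): 0.4}
print(dempster_combine(m1, m2))
```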

Didier Dubois is a Research Advisor at IRIT, the Computer Science Department of Paul Sabatier University in Toulouse, France, and belongs to the French National Centre for Scientific Research (CNRS). He holds a Doctorate in Engineering from ENSAE, Toulouse (1977), a Doctorat d’Etat from Grenoble University (1983) and an Honorary Doctorate from the Faculté Polytechnique de Mons, Belgium (1997). He is the co-author, with Henri Prade, of two books on fuzzy sets and possibility theory, and of 12 edited volumes on uncertain reasoning and fuzzy sets. Also with Henri Prade, he coordinated the Handbooks of Fuzzy Sets series published by Kluwer (7 volumes, 1998-2000, 2 of which he co-edited), which includes the book Fundamentals of Fuzzy Sets, edited again with H. Prade (Kluwer, Boston, 2000). He has contributed about 200 technical journal papers on uncertainty theories and applications. In 2002 he received the Pioneer Award of the IEEE Neural Network Society. Since 1999, he has been co-Editor-in-Chief of Fuzzy Sets and Systems. He has been an Associate Editor of the IEEE Transactions on Fuzzy Systems, of which he is now an Advisory Editor. He is a member of the editorial board of several technical journals, including the International Journal of Approximate Reasoning and Information Sciences, among others. He is a former president of the International Fuzzy Systems Association (1995-1997). His topics of interest range from Artificial Intelligence to Operations Research and Decision Sciences, with emphasis on the modelling, representation and processing of imprecise and uncertain information in reasoning, problem-solving tasks and risk analysis.

Learning from Imprecise Data

An Invited Talk by Eyke Hüllermeier (slides)

Session chair: Sebastien Destercke.

This talk addresses the problem of learning from imprecise data. Although it has been studied in statistics and various other fields for quite a while, this problem has recently received renewed interest in the realm of machine learning. In particular, the framework of superset learning will be discussed, a generalization of standard supervised learning in which training instances are labeled with a superset of the actual outcomes. Thus, superset learning can be seen as a specific type of weakly supervised learning, in which training examples are imprecise or ambiguous. We introduce a generic approach to superset learning, which is motivated by the idea of performing model identification and “data disambiguation” simultaneously. This idea is realized by means of a generalized risk minimization approach, using an extended loss function that compares precise predictions with set-valued observations. Building on this approach, we furthermore elaborate on the idea of “data imprecisiation”: by deliberately turning precise training data into imprecise data, it becomes possible to modulate the influence of individual examples on the process of model induction. In other words, data imprecisiation offers an alternative approach to instance weighting. Interestingly, several existing machine learning methods, such as support vector regression or semi-supervised support vector classification, are recovered as special cases of this approach. Moreover, promising new methods can be derived in a natural way; examples of such methods will be shown for problems such as classification, regression, and label ranking.
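
One simple way to realise such an extended loss function (a sketch of my own, not the speaker's code; the function names, intervals, and predictions are assumed) is an optimistic superset loss, which compares a precise prediction with a set-valued observation by taking the smallest base loss over the candidate outcomes, thereby disambiguating the data in the most model-friendly way.

```python
import numpy as np

def superset_loss(prediction, candidates, base_loss=lambda y, yhat: (y - yhat) ** 2):
    """Loss of a precise prediction against a set of candidate outcomes:
    the smallest base loss over the candidates (optimistic disambiguation)."""
    return min(base_loss(y, prediction) for y in candidates)

def empirical_risk(predictions, candidate_sets):
    """Generalized empirical risk over imprecisely labeled examples."""
    return np.mean([superset_loss(p, s) for p, s in zip(predictions, candidate_sets)])

# Imprecise regression targets: each observation is only known to lie in an interval,
# here discretized on a grid for the generic min-over-set implementation.
candidate_sets = [np.linspace(lo, hi, 50) for lo, hi in [(0.0, 1.0), (2.0, 2.5), (4.0, 6.0)]]
predictions = [0.8, 2.7, 5.0]   # the second prediction falls outside its interval
print(empirical_risk(predictions, candidate_sets))
```

Predictions that fall inside their interval incur (near) zero loss, while predictions outside are penalized by their distance to the interval, which is exactly how a set-valued observation constrains the model without committing to a single precise label.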

Eyke Hüllermeier is a full professor in the Department of Computer Science at Paderborn University, Germany, where he heads the Intelligent Systems Group. He graduated in mathematics and business computing, received his PhD in computer science from the University of Paderborn in 1997, and obtained a Habilitation degree in 2002. Prior to returning to Paderborn in 2014, he spent two years as a Marie Curie fellow at the Institut de Recherche en Informatique de Toulouse (IRIT) in France (1998-2000) and held professorships at the Universities of Marburg (2002-04), Dortmund (2004), Magdeburg (2005-06) and again Marburg (2007-14). His research interests are centered around methods and theoretical foundations of intelligent systems, with a specific focus on machine learning and reasoning under uncertainty. He has published more than 200 articles on these topics in top-tier journals and major international conferences, and several of his contributions have been recognized with scientific awards. Professor Hüllermeier is Co-Editor-in-Chief of Fuzzy Sets and Systems, one of the leading journals in the field of Computational Intelligence, and serves on the editorial board of several other journals, including Machine Learning, the International Journal of Approximate Reasoning, and the IEEE Transactions on Fuzzy Systems. He is a coordinator of the EUSFLAT working group on Machine Learning and Data Mining and head of the IEEE CIS Task Force on Machine Learning.
