Title: Can critical agents improve knowledge acquisition speed?

Topic proposed in: M2 MOSIG, Project --- M2 MSIAM, Project --- M2R Informatique, Project

Supervisor(s):

Keywords: Agents, Population, Cultural evolution, Ontology evolution, Knowledge representation
Project duration: 5 months (possibility to continue into a PhD)
Maximum number of students: 1
Places available: 1
Queried on: 26 April 2024, at 09:04


Description

We have observed that consensus within a population of agents is an obstacle to their assimilating new knowledge. We would therefore like to design mechanisms that allow agents to preserve divergent knowledge, and to observe their effect on knowledge acquisition speed.

Cultural evolution is the application of evolution theory to culture [Mesoudi 2006]. It may be addressed through multi-agent simulations [Steels 2012]. Experimental cultural evolution provides a population of agents with interaction games that are played randomly. In reaction to the outcome of such games, agents adapt their knowledge. Hypotheses can be tested by precisely crafting the rules that agents use in games and observing the consequences.
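As an illustration, the random-game loop described above might be sketched as follows in Python. All names here (`Agent`, `adapt`, `play_game`) are hypothetical placeholders, not the actual experimental framework; a real experiment would replace the placeholder knowledge with ontologies and alignments.

```python
import random

class Agent:
    """Minimal placeholder agent that merely counts game successes."""
    def __init__(self, name):
        self.name = name
        self.successes = 0

    def adapt(self, outcome):
        # In a real experiment this would revise the agent's ontology
        # or alignment in reaction to the game outcome; here we only
        # record whether the game succeeded.
        if outcome:
            self.successes += 1

def run_simulation(agents, play_game, rounds=1000):
    """Repeatedly pair two random agents, play an interaction game,
    and let both adapt their knowledge to the outcome."""
    for _ in range(rounds):
        a, b = random.sample(agents, 2)   # games are played randomly
        outcome = play_game(a, b)
        a.adapt(outcome)
        b.adapt(outcome)

# Example run with a trivial game that always succeeds:
population = [Agent(f"a{i}") for i in range(5)]
run_simulation(population, lambda a, b: True, rounds=100)
```

The point of the sketch is the structure of the loop: crafting `play_game` and `adapt` is precisely where experimental hypotheses are encoded.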

Our ambition is to understand and develop general mechanisms by which a society evolves its knowledge. For that purpose, we adapted this approach to the evolution of the way agents represent knowledge [Euzenat, 2014, 2017; Anslow & Rovatsos, 2015; Chocron & Schorlemmer, 2016]. We showed that cultural repair converges towards successful communication and improves the objective correctness of alignments.

We tested games involving populations of agents characterised by the use of the same ontology. Each agent individually improves the alignments between its ontology and those of agents with different ontologies, as in [Euzenat, 2014, 2017]. From time to time, however, agents pool their alignments and define consensus alignments. This combines, in the same setting, agents learning by experience and learning by exchanging knowledge.

We tested this approach with agents voting for the correspondences they held true, then starting again from the consensus. As a result, these agents achieved the same quality of alignments as individual agents, but did so more slowly.
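A minimal sketch of such majority voting, assuming alignments are represented as sets of correspondences (the representation and function name are illustrative assumptions, not the actual experimental code):

```python
from collections import Counter

def consensus_by_vote(alignments, threshold=0.5):
    """Keep every correspondence held true by more than `threshold`
    of the agents; agents then all restart from this consensus."""
    votes = Counter()
    for alignment in alignments:   # one set of correspondences per agent
        votes.update(alignment)
    quota = threshold * len(alignments)
    return {corr for corr, n in votes.items() if n > quota}

# Three agents voting over correspondences between two ontologies:
a1 = {("cat", "Feline"), ("dog", "Canine")}
a2 = {("cat", "Feline"), ("dog", "Wolf")}
a3 = {("cat", "Feline"), ("dog", "Canine")}
consensus = consensus_by_vote([a1, a2, a3])
# ("cat", "Feline") and ("dog", "Canine") pass the quota;
# the minority correspondence ("dog", "Wolf") is discarded.
```

Note the discarding behaviour in the example: a correspondence held by a minority disappears from the consensus even if that minority had learnt it, which is exactly the loss discussed below.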

This came as a surprise, because it had previously been reported that similar protocols lead agents to converge faster [Steels 2012]. There are explanations for this, the most compelling being that in the reported experiments (1) knowledge integration was non-discrete (weights were adjusted rather than correspondences chosen), and (2) agents integrated what they had learnt, not what they believed. Indeed, our agents started with random alignments, so voting with those was of little value and, moreover, led knowledge learnt by agents to be discarded.

The goal of this internship is to design mechanisms by which a population of agents could better take advantage of learnt knowledge. This may be achieved by assigning a higher value to correspondences that agents have learnt, or that have been tested. It may also be achieved by having critical agents who participate in the consensus but preserve and use the knowledge they have learnt, so that they do not lose information. It is expected that, as more agents learn the same thing, their view will prevail.
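The first mechanism mentioned above, valuing tested correspondences more highly, might be sketched as a weighted vote. The representation (a weight per correspondence per agent, reflecting how often it was successfully tested) and the quota are illustrative assumptions for the sketch, not the mechanism to be designed:

```python
from collections import defaultdict

def weighted_consensus(alignments, quota):
    """Sum per-agent weights for each correspondence and keep those
    reaching the quota: learnt, tested knowledge thus outweighs
    random initial correspondences in the vote."""
    scores = defaultdict(float)
    for alignment in alignments:   # one {correspondence: weight} dict per agent
        for corr, weight in alignment.items():
            scores[corr] += weight
    return {corr for corr, s in scores.items() if s >= quota}

# Two agents have repeatedly tested ("dog", "Canine") (weight 3 each);
# three agents still hold the random ("dog", "Wolf") (weight 1 each):
a1 = {("dog", "Canine"): 3.0}
a2 = {("dog", "Canine"): 3.0}
a3 = {("dog", "Wolf"): 1.0}
a4 = {("dog", "Wolf"): 1.0}
a5 = {("dog", "Wolf"): 1.0}
result = weighted_consensus([a1, a2, a3, a4, a5], quota=4.0)
# The tested correspondence (total weight 6) prevails over the
# merely random one (total weight 3), despite being held by fewer agents.
```

Under such a scheme, as more agents learn and test the same correspondence, its accumulated weight grows and its view prevails, which is the expected behaviour described above.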

The question of the speed of convergence of such solutions remains open, and the goal of this work is to shed light on it by designing agents and experiments to establish it.

This work is part of an ambitious programme towards what we call cultural knowledge evolution. It falls within the MIAI Knowledge communication and evolution chair and, as such, may lead to a PhD thesis.

References:

[Anslow & Rovatsos, 2015] Michael Anslow, Michael Rovatsos, Aligning experientially grounded ontologies using language games, Proc. 4th international workshop on graph structure for knowledge representation, Buenos Aires (AR), pp15-31, 2015 [DOI:10.1007/978-3-319-28702-7_2]
[Chocron & Schorlemmer, 2016] Paula Chocron, Marco Schorlemmer, Attuning ontology alignments to semantically heterogeneous multi-agent interactions, Proc. 22nd European Conference on Artificial Intelligence, The Hague (NL), pp871-879, 2016 [DOI:10.3233/978-1-61499-672-9-871]
[Euzenat & Shvaiko, 2013] Jérôme Euzenat, Pavel Shvaiko, Ontology matching, 2nd edition, Springer-Verlag, Heidelberg (DE), 2013
[Euzenat, 2014] Jérôme Euzenat, First experiments in cultural alignment repair (extended version), in: Proc. 3rd ESWC workshop on Debugging ontologies and ontology mappings (WoDOOM), Hersonissos (GR), LNCS 8798:115-130, 2014 ftp://ftp.inrialpes.fr/pub/exmo/publications/euzenat2014c.pdf
[Euzenat, 2017] Jérôme Euzenat, Communication-driven ontology alignment repair and expansion, in: Proc. 26th International joint conference on artificial intelligence (IJCAI), Melbourne (AU), pp185-191, 2017 ftp://ftp.inrialpes.fr/pub/moex/papers/euzenat2017a.pdf
[Mesoudi 2006] Alex Mesoudi, Andrew Whiten, Kevin Laland, Towards a unified science of cultural evolution, Behavioral and brain sciences 29(4):329–383, 2006 http://alexmesoudi.com/s/Mesoudi_Whiten_Laland_BBS_2006.pdf
[Steels, 2012] Luc Steels (ed.), Experiments in cultural language evolution, John Benjamins, Amsterdam (NL), 2012

Links: