Title: Probability Calibration for Image Classification Systems (assigned)

Topic proposed in: M2 MOSIG, Project --- M2 MSIAM, Project --- M2R Informatique, Project

Supervisor(s):

Keywords: Image classification, probability normalization, deep learning.
Project duration: 3 to 5 months.
Maximum number of students: 1
Places available: 1
Query performed on: 24 April 2024, at 00:04


Description

Context: Deep convolutional neural networks now achieve very good performance in image classification. Given a set of target concepts on which they have been trained and a set of test images, they typically output an images × concepts score matrix representing how likely each target concept is to be visible in each test image. These scores can be used to rank test images according to how likely they are to contain a given target concept (search task), or to rank target concepts according to how likely they are to be visible in a given image (classification task). The neural network, or any other classification tool (e.g., Support Vector Machines), sometimes comes with a built-in or added normalization step that attempts to turn the scores into actual probabilities that an image contains a concept, or that a concept is visible in an image. However, these steps usually have limitations, the main one being that such calibration or normalization is performed along only one dimension: either over the concepts (e.g., softmax normalization for mutually exclusive concepts) or over the images (e.g., Platt scaling [1]).
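As an illustration of the two one-dimensional schemes mentioned above, the sketch below (in NumPy, on a synthetic score matrix; the Platt parameters are placeholders, not values fitted on held-out data) normalizes either per image (softmax over concepts) or per concept (a Platt-style sigmoid over scores):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(5, 4))  # synthetic images x concepts score matrix

# Softmax: normalizes along the concept dimension only, so each image's
# probabilities sum to 1 (appropriate for mutually exclusive concepts).
shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
softmax_probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# Platt scaling: a per-concept sigmoid p = 1 / (1 + exp(a*s + b)), where a
# and b are normally fitted on held-out data; the values here are placeholders.
a, b = -2.0, 0.0
platt_probs = 1.0 / (1.0 + np.exp(a * scores + b))
```

Neither step couples the two dimensions: the softmax ignores how a concept's scores are distributed across images, and Platt scaling ignores the other concepts of the same image.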

Objective: The objective of the proposed internship is to find normalization or calibration methods that take both dimensions into account simultaneously and that additionally enforce further constraints, such as the consistency of the inferred probabilities across concepts organized in a hierarchy [2]. Such a normalization is expected to be useful for a number of tasks, including the possibility of consistently combining probabilities to search more efficiently for images containing several target concepts simultaneously. Moreover, the same normalization could be used for both the search and the classification tasks, and this joint normalization could also lead to better performance on each of these tasks taken separately.
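To make the idea of a normalization acting on both dimensions concrete, the sketch below applies a Sinkhorn-style alternating row/column rescaling to a positive score matrix, then clips a toy child concept's probability by its parent's. Both the alternating scheme and the two-concept hierarchy are illustrative assumptions, not the method the internship is expected to produce:

```python
import numpy as np

rng = np.random.default_rng(1)
scores = np.exp(rng.normal(size=(5, 4)))  # positive images x concepts scores
m, n = scores.shape

# Sinkhorn-style alternating normalization: rescale columns and rows in turn
# until each row sums to 1 and each column sums to m / n, so that both the
# per-image and the per-concept marginals are controlled at the same time.
p_joint = scores.copy()
for _ in range(200):
    p_joint /= p_joint.sum(axis=0, keepdims=True) * (n / m)  # columns -> m / n
    p_joint /= p_joint.sum(axis=1, keepdims=True)            # rows -> 1

# Toy hierarchy constraint: concept 0 is the parent of concept 1 (made-up
# indices), so the parent's probability must be at least the child's.
p_hier = p_joint.copy()
p_hier[:, 0] = np.maximum(p_hier[:, 0], p_hier[:, 1])
```

In an actual solution, the target marginals, the concept hierarchy, and the calibration objective would come from the data and from the approaches of [1, 2]; the point here is only that the two rescalings interact, unlike softmax or Platt scaling taken alone.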

[1] John Platt: Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. Advances in Large Margin Classifiers 10(3): 61-74 (1999).

[2] Anuvabh Dutt, Denis Pellerin, Georges Quénot: Improving Image Classification using Coarse and Fine Labels. ICMR 2017: 438-442.