Conference paper, 2021

Multimodal neural networks better explain multivoxel patterns in the hippocampus

Abstract

The human hippocampus possesses "concept cells", neurons that fire when presented with stimuli belonging to a specific concept, regardless of the modality. Recently, similar concept cells were discovered in a multimodal network called CLIP [1]. Here, we ask whether CLIP can explain the fMRI activity of the human hippocampus better than a purely visual (or linguistic) model. We extend our analysis to a range of publicly available uni- and multimodal models. We demonstrate that "multimodality" stands out as a key component when assessing the ability of a network to explain the multivoxel activity in the hippocampus.
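This page does not describe the analysis itself, so as an illustration only, below is a minimal Python sketch of one standard way such model-to-brain comparisons are done: representational similarity analysis (RSA), in which a model's pairwise stimulus dissimilarities are correlated with those of the multivoxel patterns. The use of RSA here is an assumption (the paper may use a different method, e.g. encoding models), and all arrays are random placeholders standing in for real model features and fMRI data.

# Hedged sketch: score how well a model's stimulus representations explain
# hippocampal multivoxel patterns via RSA. ASSUMPTION: the paper's exact
# analysis is not stated on this page; this is one conventional approach.
# All data below are random placeholders, not real features or fMRI data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 40, 512, 300

# Stand-ins for model embeddings (e.g., CLIP image features) and for
# hippocampal multivoxel response patterns, one row per stimulus.
model_features = rng.standard_normal((n_stimuli, n_features))
voxel_patterns = rng.standard_normal((n_stimuli, n_voxels))

# Representational dissimilarity matrices (condensed upper triangles):
# pairwise correlation distance between stimulus representations.
model_rdm = pdist(model_features, metric="correlation")
brain_rdm = pdist(voxel_patterns, metric="correlation")

# Spearman correlation between the two RDMs measures how well the model's
# representational geometry matches the hippocampal one; computing this for
# several uni- and multimodal models lets them be ranked against each other.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f} (p={p:.3g})")

Repeating this score across a set of candidate networks is one way the abstract's comparison of unimodal versus multimodal models could be operationalized.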
Main file

Bhavin_concept_cells_paper.pdf (6.72 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03428635, version 1 (15-11-2021)

Identifiers

  • HAL Id: hal-03428635, version 1

Cite

Bhavin Choksi, Milad Mozafari, Rufin Vanrullen, Leila Reddy. Multimodal neural networks better explain multivoxel patterns in the hippocampus. Neural Information Processing Systems (NeurIPS) conference: 3rd Workshop on Shared Visual Representations in Human and Machine Intelligence (SVRHM 2021), Dec 2021, Virtual Conference, United States. ⟨hal-03428635⟩
