Disrupted visual input unveils the computational details of artificial neural networks for face perception

Li, Yi-Fan and Ying, Haojiang (2022) Disrupted visual input unveils the computational details of artificial neural networks for face perception. Frontiers in Computational Neuroscience, 16. ISSN 1662-5188

Abstract

Background: The Deep Convolutional Neural Network (DCNN), with its great performance, has attracted the attention of researchers from many disciplines. Studies of DCNNs and of biological neural systems have inspired each other reciprocally. Brain-inspired neural networks not only achieve great performance but also serve as computational models of biological neural systems.

Methods: In this study, we trained and tested several typical DCNNs (AlexNet, VGG11, VGG13, VGG16, DenseNet, MobileNet, and EfficientNet) on a face ethnicity categorization task (experiment 1) and an emotion categorization task (experiment 2). We measured the performance of the DCNNs by testing them with original and lossy visual inputs (various kinds of image occlusion) and compared their performance with that of human participants. Moreover, the class activation map (CAM) method allowed us to visualize the foci of the “attention” of these DCNNs.
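
The abstract does not include implementation details, but the Methods lend themselves to a short sketch. The following is a minimal, hypothetical illustration assuming PyTorch/torchvision: an ImageNet-pretrained VGG13 stands in for the paper's fine-tuned models, `occlude` is an invented helper producing one simple kind of lossy input, and a Grad-CAM-style computation (a gradient-weighted variant of the CAM method the abstract cites) visualizes where the network "attends". File names and parameters are illustrative, not the authors' pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# ImageNet-pretrained VGG13 as a stand-in; the paper fine-tuned its models
# on face ethnicity/emotion categorization, which is not reproduced here.
model = models.vgg13(weights="IMAGENET1K_V1")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def occlude(img, top, left, size):
    """Zero out a square patch: one simple form of lossy visual input."""
    out = img.clone()
    out[:, :, top:top + size, left:left + size] = 0.0
    return out

def grad_cam(model, x, target_class):
    """Grad-CAM over the last convolutional block of the VGG feature extractor."""
    store = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep gradients of this feature map
        store["act"] = output

    # features[-2] is the ReLU after the last conv layer in torchvision's VGG13
    handle = model.features[-2].register_forward_hook(hook)
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()
    handle.remove()

    act = store["act"][0]             # (C, H, W) activations
    grads = store["act"].grad[0]      # (C, H, W) gradients
    weights = grads.mean(dim=(1, 2))  # channel weights: GAP over gradients
    cam = F.relu((weights[:, None, None] * act).sum(dim=0))
    return (cam / (cam.max() + 1e-8)).detach()  # normalize to [0, 1]

# Hypothetical usage with an illustrative file name.
img = preprocess(Image.open("face.jpg")).unsqueeze(0)
lossy = occlude(img, top=60, left=60, size=80)
pred = int(model(lossy).argmax(dim=1))
heatmap = grad_cam(model, lossy, target_class=pred)  # low-res; upsample to overlay
```

The returned map is coarse (one value per spatial location of the last conv layer), so in practice it is upsampled to the input resolution and overlaid on the face image to show which regions drive the categorization decision under each type of occlusion.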

Results: The results suggested that VGG13 performed the best: its performance closely resembled that of human participants in terms of psychophysical measurements, it utilized similar areas of the visual input as humans did, and its performance was the most consistent across inputs with various kinds of impairment.

Item Type: Article
Subjects: Universal Eprints > Medical Science
Depositing User: Managing Editor
Date Deposited: 25 Mar 2023 12:25
Last Modified: 02 Feb 2024 03:55
URI: http://journal.article2publish.com/id/eprint/1581
