Latent-Insensitive Autoencoders for Anomaly Detection

Battikh, Muhammad S. and Lenskiy, Artem A. (2021) Latent-Insensitive Autoencoders for Anomaly Detection. Mathematics, 10 (1). p. 112. ISSN 2227-7390

Abstract

Reconstruction-based approaches to anomaly detection tend to fall short when applied to complex datasets with target classes that possess high inter-class variance. Similar to the idea of self-taught learning used in transfer learning, many domains are rich with similar unlabeled datasets that could be leveraged as a proxy for out-of-distribution samples. In this paper, we introduce the latent-insensitive autoencoder (LIS-AE), in which unlabeled data from a similar domain are utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder such that it is only capable of reconstructing one task. We provide theoretical justification for the proposed training process and loss functions, along with an extensive ablation study highlighting important aspects of our model. We test our model in multiple anomaly detection settings, presenting quantitative and qualitative analyses that showcase the significant performance improvement of our model on anomaly detection tasks.
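
The abstract describes the core mechanism at a high level: train an autoencoder on the target data while using unlabeled samples from a similar domain as negative examples that shape the bottleneck so it can only reconstruct the target task. The exact LIS-AE loss functions and training schedule are given in the paper; the snippet below is only a minimal PyTorch-style sketch of that idea. The hinge-style negative penalty, the two-optimizer split, the margin and alpha parameters, and the network sizes are illustrative assumptions rather than the authors' formulation.

import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.latent = nn.Linear(256, latent_dim)  # bottleneck layer to be "shaped"
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.latent(self.encoder(x)))

def train_step(model, opt_all, opt_latent, x_pos, x_neg, margin=1.0, alpha=1.0):
    mse = nn.MSELoss()
    # Step 1: ordinary reconstruction of target (positive) data, updating all weights.
    opt_all.zero_grad()
    loss_pos = mse(model(x_pos), x_pos)
    loss_pos.backward()
    opt_all.step()

    # Step 2: update only the latent (bottleneck) layer using negative examples:
    # keep reconstructing positives while pushing the reconstruction error of
    # negatives above a margin (hinge-style surrogate; an assumption here,
    # not the loss derived in the paper).
    opt_latent.zero_grad()
    neg_err = mse(model(x_neg), x_neg)
    loss_shape = mse(model(x_pos), x_pos) + alpha * torch.relu(margin - neg_err)
    loss_shape.backward()
    opt_latent.step()
    return loss_pos.item(), loss_shape.item()

# Example wiring: one optimizer over all parameters, one over the latent layer only.
model = AE()
opt_all = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_latent = torch.optim.Adam(model.latent.parameters(), lr=1e-3)

# At test time, the per-sample reconstruction error serves as the anomaly score:
# samples the shaped autoencoder fails to reconstruct are flagged as anomalous.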

Item Type: Article
Uncontrolled Keywords: anomaly detection; autoencoders; one-class classification; principal components analysis; self-taught learning; negative examples
Subjects: Universal Eprints > Medical Science
Depositing User: Managing Editor
Date Deposited: 07 Nov 2022 09:22
Last Modified: 18 Sep 2023 09:43
URI: http://journal.article2publish.com/id/eprint/62
