Versatile Anomaly Detection with Outlier Preserving Distribution Mapping Autoencoders
State-of-the-art deep learning methods for outlier detection assume that outliers will appear far from inlier data in the latent space produced by distribution-mapping deep networks. However, this assumption fails in practice, because the divergence penalty adopted for this purpose encourages mapping outliers into the same high-probability regions as inliers. To overcome this shortcoming, we introduce a novel deep learning outlier detection method, called Outlier Preserving Distribution Mapping Autoencoder (OP-DMA), which succeeds in mapping outliers to low-probability regions in the latent space of an autoencoder. For this, we leverage the insight that outliers are likely to have a higher reconstruction error than inliers. We thus achieve outlier-preserving distribution mapping by weighting the reconstruction error of each point by the value of a multivariate Gaussian probability density function evaluated at that point. This weighting implies that outliers incur a lower overall penalty when they are mapped to low-probability regions. We show that if the global minimum of our newly proposed loss function is achieved, then OP-DMA maps inliers to regions with a Mahalanobis distance less than \delta and outliers to regions beyond this \delta, where \delta is the inverse Chi-Squared CDF evaluated at 1−\alpha, with \alpha the percentage of outliers in the dataset. We evaluated OP-DMA on 11 benchmark real-world datasets and compared its performance against 7 state-of-the-art outlier detection methods, including ALOCC and MO-GAAL. Our experiments show that OP-DMA outperforms the state-of-the-art methods on 7 of the datasets and performs second best on 3 of the remaining 4 datasets, while no other method wins on more than 1 dataset.
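The two key quantities in the abstract, the PDF-weighted reconstruction error and the Chi-Squared threshold \delta, can be sketched as follows. This is a minimal illustration of the stated idea, not the paper's implementation; the function names, the squared-error reconstruction loss, and the latent mean/covariance parameters are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import chi2, multivariate_normal

def weighted_reconstruction_loss(x, x_hat, z, mean, cov):
    """Sketch of the PDF-weighted loss described in the abstract.

    Each point's reconstruction error is weighted by a multivariate
    Gaussian PDF evaluated at its latent representation z, so points
    with high reconstruction error (likely outliers) contribute less
    to the loss when mapped to low-probability latent regions.
    """
    recon_err = np.sum((x - x_hat) ** 2, axis=1)          # per-point error
    weights = multivariate_normal.pdf(z, mean=mean, cov=cov)
    return np.mean(weights * recon_err)

def outlier_threshold(alpha, latent_dim):
    """Threshold delta from the abstract: the inverse Chi-Squared CDF
    evaluated at 1 - alpha, with alpha the outlier fraction."""
    return chi2.ppf(1.0 - alpha, df=latent_dim)
```

Under this weighting, moving a high-error point's latent code away from the Gaussian mean lowers the loss, which is the mechanism that preserves outliers in low-probability regions.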