A Comparison of Autoencoders and Variational Autoencoders for Anomaly Detection in Dermoscopic Images

Abstract:
The early detection and diagnosis of skin abnormalities are crucial for the effective treatment and management of skin diseases. This paper explores the application of deep learning techniques to skin tissue analysis, focusing on the detection of abnormalities in dermoscopic images. Unsupervised learning methods such as Autoencoders (AE) and Variational Autoencoders (VAE) eliminate the need for labeled data, making them more efficient and scalable than supervised learning. We compare the performance of AE and VAE architectures in developing a robust model capable of distinguishing between benign and malignant skin lesions.
 
This study uses the HAM10000 dataset of 10,015 dermoscopic images, divided into seven classes spanning both benign and malignant diagnostic categories. The dataset was split into benign (normal) and malignant (anomalous) cases. The models were trained to learn the features of the normal data and to reconstruct these images. The reconstruction error measures how well the model captures the features of a given image. An optimal decision boundary on the reconstruction error is then chosen to classify images as benign or malignant.
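The classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reconstruction errors here are synthetic stand-ins (in the study they would come from an AE or VAE trained only on benign images), and the threshold sweep maximizing F1 is one plausible way to choose the decision boundary.

```python
import numpy as np

# Hypothetical per-image reconstruction errors (e.g., mean squared error).
# A model trained only on benign images reconstructs benign lesions well
# (low error) and anomalous malignant lesions poorly (high error).
rng = np.random.default_rng(0)
benign_err = rng.normal(0.02, 0.005, 500)
malignant_err = rng.normal(0.05, 0.015, 200)

errors = np.concatenate([benign_err, malignant_err])
labels = np.concatenate([np.zeros(500), np.ones(200)])  # 1 = malignant

def evaluate(threshold, errors, labels):
    """Classify by thresholding reconstruction error; return metrics."""
    preds = (errors > threshold).astype(int)  # high error -> anomalous
    tp = np.sum((preds == 1) & (labels == 1))
    fn = np.sum((preds == 0) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    tn = np.sum((preds == 0) & (labels == 0))
    accuracy = (tp + tn) / len(labels)
    f1 = 2 * tp / (2 * tp + fp + fn)
    fnr = fn / (fn + tp)  # fraction of malignant cases missed
    return accuracy, f1, fnr

# Sweep candidate thresholds and pick the decision boundary maximizing F1.
thresholds = np.linspace(errors.min(), errors.max(), 200)
best = max(thresholds, key=lambda t: evaluate(t, errors, labels)[1])
acc, f1, fnr = evaluate(best, errors, labels)
```

Other boundary-selection criteria are possible; in a clinical setting one might instead pick the threshold minimizing the False Negative Rate subject to an accuracy constraint, since missed malignant cases are the costliest errors.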
 
The AE and VAE models were evaluated using accuracy, F1-score, and False Negative Rate (FNR). Minimizing FNR is crucial in healthcare, as it reflects missed malignant cases. Experimental results, averaged over 30 training runs, demonstrate that the VAE outperforms the AE in accuracy (69.31% vs. 68.80%), while the AE surpasses the VAE with a higher F1-score (70.30% vs. 69.45%) and a lower FNR (26.15% vs. 30.19%). These findings suggest the AE architecture is promising for automated skin cancer detection, potentially leading to more accurate and timely diagnoses and improved patient outcomes.
