This repository compares how different face categories are represented across the layers of a deep convolutional neural network (DCNN), from the early pooling layers to the fully connected layers, to see at which stage the categories become clearly separated.
To this end, I used three common visualization (dimension-reduction) methods:
- PCA
- t-SNE: developed by Laurens van der Maaten and Geoffrey Hinton (see the original paper)
- UMAP: developed by Leland McInnes, John Healy, and James Melville (see the original paper; documentation is available via ReadTheDocs)
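As a minimal sketch of how all three methods can be applied, the snippet below embeds a small sample dataset into 2-D with scikit-learn's PCA and t-SNE; the analogous UMAP call is shown only as a comment, since `umap-learn` is a separate third-party package (the dataset, sample size, and parameters here are illustrative, not the ones used in this repository):

```python
# Illustrative comparison of dimension-reduction methods on a toy dataset.
# Assumes scikit-learn is installed; UMAP usage is sketched in a comment.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)  # 1797 samples, 64 features each
X, y = X[:500], y[:500]              # subsample to keep t-SNE fast

pca_2d = PCA(n_components=2).fit_transform(X)
tsne_2d = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

# With umap-learn installed, the API is analogous:
# import umap
# umap_2d = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

print(pca_2d.shape, tsne_2d.shape)   # each embedding is (500, 2)
```

All three libraries follow the same `fit_transform` pattern, which makes swapping methods in a comparison like this straightforward.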
As a first step, I compared the performance of the three methods on the Fashion-MNIST dataset:
Visual inspection shows that UMAP does the best job: the Fashion-MNIST categories are more clearly clustered than with PCA or t-SNE. Comparing computation time on my laptop, UMAP also wins this competition (PCA is the quickest, but its clusters are poorly separated):
| Method | Elapsed time |
|---|---|
| PCA | 1.34 sec |
| t-SNE | 6083.97 sec |
| UMAP | 54 sec |
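The timings above can be collected with a simple wrapper around each call; the sketch below shows the pattern for PCA (this is an assumed measurement approach, not the exact script used for the table, and absolute numbers will vary by machine):

```python
# Minimal timing sketch: wrap fit_transform with time.perf_counter.
# The same pattern applies to the t-SNE and UMAP calls.
import time
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)

start = time.perf_counter()
embedding = PCA(n_components=2).fit_transform(X)
elapsed = time.perf_counter() - start

print(f"PCA | {elapsed:.2f} sec")  # elapsed time depends on the machine
```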
The next step is to apply these methods to my face image categories.