A simple implementation of variational autoencoders (VAEs) on the MNIST dataset in TensorFlow.
Simple VAE face generator
Built a model to create highlights/summaries of a given video. The results of this study show that, with a structural similarity index (SSIM) of 98%, the proposed technique is effective at choosing keyframes that are both informative and distinct within the original video.
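SSIM is the similarity metric cited above. As a rough illustration (not the repo's actual code), here is a simplified single-window SSIM in numpy; real implementations such as `skimage.metrics.structural_similarity` average the index over a sliding Gaussian window, and the constants `c1`, `c2` below are the conventional stabilizers for images scaled to [0, 1]:

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Simplified SSIM computed over the whole image (no sliding window).

    x, y: arrays with pixel values in [0, 1]. Returns a value in [-1, 1],
    where 1 means the images are identical.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
print(global_ssim(frame, frame))      # identical frames -> 1.0
print(global_ssim(frame, 1 - frame))  # inverted frame -> much lower (negative)
```

A keyframe selector would compare candidate frames against already-selected ones and keep those whose SSIM falls below a distinctiveness threshold.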
Leveraging LD variational autoencoders to identify latent representations as dimensionality-reduction embeddings of single-cell data.
An implementation of a variational autoencoder with t-SNE visualization on the MNIST dataset.
A repository for generating synthetic data (images) using various DL/ML models.
A variational autoencoder (VAE) that generates human faces, trained on the CelebA dataset. A VAE is a generative model that learns to represent high-dimensional data (such as images) in a lower-dimensional latent space, and then generates new data by sampling from that space.
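The sampling step this description refers to relies on the reparameterization trick: the encoder outputs a mean and log-variance, and the latent code is drawn as a deterministic function of those plus external noise, so gradients can flow through the sampling. A minimal numpy sketch (illustrative only, not code from this repo):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample as a deterministic function of (mu, log_var) plus
    external noise is what lets backpropagation reach the encoder outputs.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(42)
mu = np.zeros((10000, 2))       # encoder mean output (here: standard normal)
log_var = np.zeros((10000, 2))  # log-variance 0 -> sigma = 1
z = reparameterize(mu, log_var, rng)
print(z.mean(axis=0))  # approximately [0, 0]
print(z.std(axis=0))   # approximately [1, 1]
```

At generation time, new faces are produced by decoding z drawn directly from the prior N(0, I).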
This repo is devoted to the practicals of the course Deep Learning (5204DLFV6Y), taught at the University of Amsterdam, Fall 2020.
This repository contains the code, data and scripts used to write the Bachelor Thesis "Latent representations for traditional music analysis and generation".
Implementing a Conditional VAE for video prediction with PyTorch
Variational Autoencoder (VAE) trained on MNIST
Convolutional Variational Autoencoder on VizdoomTakeCover
A variational autoencoder can be defined as an autoencoder whose training is regularised to avoid overfitting and to ensure that the latent space has good properties that enable a generative process.
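The regularisation mentioned here is the KL-divergence term of the VAE objective, which pulls the approximate posterior N(mu, sigma^2) toward the prior N(0, I). A minimal numpy sketch of the loss, assuming a mean-squared-error reconstruction term (the choice of reconstruction loss varies by implementation):

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """VAE training loss: reconstruction error plus KL regularisation.

    The KL term is the closed-form divergence KL(N(mu, sigma^2) || N(0, I)),
    summed over latent dimensions and averaged over the batch.
    """
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    kl = np.mean(-0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + kl

x = np.ones((4, 8))          # a toy batch of 4 inputs
mu = np.zeros((4, 2))        # posterior already matches the prior...
log_var = np.zeros((4, 2))   # ...(mean 0, variance 1)
print(vae_loss(x, x, mu, log_var))  # perfect reconstruction, zero KL -> 0.0
```

Without the KL term the model collapses to a plain autoencoder, whose latent space has no guarantee of being smooth or sampleable.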
Handwritten digit generation using both VAE and GAN models.
Topics include function approximation, learning dynamics, using learned dynamics in control and planning, handling uncertainty in learned models, learning from demonstration, and model-based and model-free reinforcement learning.
Utilized VAE (variational autoencoder) and CGAN (conditional generative adversarial network) models to generate synthetic chatter signals, addressing the challenge of imbalanced data in turning operations, and compared the performance of the synthetic chatter signals.
Solutions for Advanced Image Analysis course assignments, featuring model designs for image summation and generation with MNIST, and style transfer using CycleGAN with MNIST and SVHN datasets.