Variational Autoencoder in TensorFlow
“Deep Learning has already surpassed human-level performance on image recognition tasks. In unsupervised learning, on the other hand, deep generative models such as Generative Adversarial Networks (GANs) have become popular for producing realistic synthetic images, among various other applications. Before GANs were invented, there were several fundamental and well-known neural-network-based architectures for generative modeling. Today, we will take you back in time and discuss one of the most popular pre-GAN-era deep generative models: the Variational Autoencoder. In this tutorial, you will be introduced to the Variational Autoencoder in TensorFlow.
In our previous post, we introduced you to Autoencoders and covered various aspects of them, both theoretically and practically. We learned why autoencoders are not truly generative: they can only produce images when you manually pick points in the latent space and feed them through the decoder. We validated this hypothesis by experimenting with autoencoders on two datasets: Fashion-MNIST and Google’s Cartoon Set Data.
Do check out the post Introduction to Autoencoder in TensorFlow, if you haven’t already! …”
Source: learnopencv.com/variational-autoencoder-in-tensorflow/
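To make the excerpt's point concrete, here is a minimal sketch of what “manually picking a point in latent space and feeding it through the decoder” looks like in TensorFlow/Keras. The decoder layers, the latent dimensionality, and the 28x28 output shape are illustrative assumptions, not the model from the original post.

```python
# Illustrative sketch only: hand-picking a latent point and decoding it,
# which is the only way a plain autoencoder can "generate" images.
# Architecture and sizes are assumptions for demonstration.
import numpy as np
import tensorflow as tf

latent_dim = 2  # assumed latent dimensionality

# Toy decoder: maps a latent vector to a 28x28 grayscale image.
decoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Hand-picked latent point; with a plain autoencoder there is no prior
# telling us which regions of the latent space decode to realistic images.
z = np.array([[0.5, -1.0]], dtype="float32")
generated = decoder(z)  # shape: (1, 28, 28)
print(generated.shape)
```

The guesswork in choosing `z` is exactly what the Variational Autoencoder addresses: by constraining the latent space toward a known prior during training, points sampled from that prior decode to plausible images.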