What would happen if we showed the autoencoder a rotten banana instead? Well, our autoencoder never learned to represent rotten bananas, so the decoder will never be able to reconstruct one correctly. The result will look like a ripe banana, because that's all the information the decoder has 🤷‍♂️.
If we take the original image and compute its difference from the output (the reconstruction error), we'll see a large divergence. If both were ripe bananas, the divergence would be small, so a large one immediately tells us there's a problem with the input image.
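Here is a minimal sketch of that idea. Everything below is hypothetical: the pixel values stand in for a real autoencoder's inputs and outputs, and the threshold is made up. The point is only that comparing input and reconstruction gives us an anomaly score.

```python
import numpy as np

def reconstruction_error(original, reconstruction):
    # Mean squared difference between the input and what the
    # autoencoder gave back; large values flag anomalies.
    return np.mean((original - reconstruction) ** 2)

# Hypothetical stand-in images: a "ripe banana" the autoencoder
# reconstructs almost perfectly, and a "rotten banana" the decoder
# can only map back to a ripe-looking one (all it ever learned).
ripe = np.full((28, 28), 0.8)
ripe_recon = ripe + np.random.default_rng(0).normal(0.0, 0.01, (28, 28))
rotten = np.full((28, 28), 0.2)
rotten_recon = np.full((28, 28), 0.8)  # decoder outputs a ripe banana anyway

THRESHOLD = 0.05  # made-up cutoff; in practice tuned on normal data
print("ripe error:  ", reconstruction_error(ripe, ripe_recon))
print("rotten error:", reconstruction_error(rotten, rotten_recon))
```

The ripe banana's error stays near zero, while the rotten one's error is large, so thresholding the reconstruction error separates normal inputs from anomalies.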
One more interesting application
We can also do something different. What would happen if we taught the autoencoder to transform the input image into something else?
For example, we could teach an autoencoder to remove noise from pictures by showing it noisy images and expecting back their corresponding clean versions.
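The trick can be sketched with a toy linear autoencoder in plain numpy. This is not a real denoising model, just an illustration of the training setup: the synthetic "images", sizes, learning rate, and iteration count are all invented for the example. What matters is the pairing: the noisy image goes in, but the loss compares the output against the clean original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 flattened "images" (16 values each) that lie on a
# 3-dimensional structure, so a 3-unit bottleneck can capture them.
basis = rng.normal(size=(3, 16)) / 4.0
clean = rng.normal(size=(200, 3)) @ basis
noisy = clean + rng.normal(0.0, 0.2, clean.shape)  # corrupted copies

# A minimal linear autoencoder (16 -> 3 -> 16), trained by gradient
# descent. The denoising twist: encode the NOISY input, but penalize
# the difference from the CLEAN target.
W_enc = rng.normal(0.0, 0.1, (16, 3))
W_dec = rng.normal(0.0, 0.1, (3, 16))
lr = 0.5
for _ in range(2000):
    z = noisy @ W_enc                    # encode the noisy image
    recon = z @ W_dec                    # decode back to image space
    err = (recon - clean) / len(noisy)   # compare against the clean version
    W_dec -= lr * z.T @ err
    W_enc -= lr * noisy.T @ (err @ W_dec.T)

before = np.mean((noisy - clean) ** 2)  # error of leaving the noise in
after = np.mean((noisy @ W_enc @ W_dec - clean) ** 2)
print(f"noise level: {before:.4f}, after denoising: {after:.4f}")
```

Because the bottleneck can only keep the 3 underlying dimensions, the reconstruction drops most of the noise, and the output ends up closer to the clean image than the noisy input was. A real denoiser would use a convolutional autoencoder and a deep-learning framework, but the noisy-in, clean-out pairing is the same.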