Latest Posts
-
The Elements of GANs, Part 3: Conditional GANs
Rather than just generating a realistic output with our GAN, in many cases we’d like to generate a plausible output given some input. In fact, I think that most of the big applications of GANs, beyond cool demonstrations, fall into this category one way or another. Such GANs are called conditional GANs (CGANs), as they generate a distribution of outputs conditioned on a given input. The extension of GANs to make them conditional is quite straightforward, and existing GANs can be adapted fairly easily into conditional designs.
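As a minimal sketch of what that extension looks like (in PyTorch, with hypothetical layer sizes and input dimensions that are not taken from the post), conditioning usually amounts to feeding the conditioning input to both the generator and the discriminator, for example by concatenation:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generator that maps (noise, condition) -> sample."""
    def __init__(self, noise_dim=64, cond_dim=10, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, cond):
        # Condition by concatenating the conditioning input with the noise vector.
        return self.net(torch.cat([z, cond], dim=1))

class ConditionalDiscriminator(nn.Module):
    """Discriminator that scores (sample, condition) pairs."""
    def __init__(self, in_dim=784, cond_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + cond_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x, cond):
        # The same condition is given to the discriminator so it can judge
        # whether the sample is plausible *given* that input.
        return self.net(torch.cat([x, cond], dim=1))
```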
-
The Elements of GANs, Part 2: Wasserstein GANs and the Gradient Penalty
Training GANs remains a bit of an art, and small changes in architecture and training procedure can make a huge difference to the end results. The effects of various tricks and techniques are not always predictable, but that’s not to say that you can’t save yourself some trouble by adopting a few best practices. In my work with GANs, perhaps the one thing that improved my quality of life most was adopting the Wasserstein loss together with the gradient penalty. Despite being introduced only a couple of years ago, Wasserstein GANs (WGANs) have become one of the standard ways to implement GANs.
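As a rough sketch of what that combination looks like (in PyTorch, assuming a hypothetical critic callable, flattened 2-D sample tensors, and the commonly used penalty weight of 10; this is not code from the post), the critic loss is the usual Wasserstein estimate plus a penalty that pushes the critic’s gradient norm towards 1 at points interpolated between real and generated samples:

```python
import torch

def wgan_gp_critic_loss(critic, real, fake, gp_weight=10.0):
    """Wasserstein critic loss with gradient penalty (WGAN-GP)."""
    # Wasserstein estimate: the critic should score real samples high, fakes low.
    w_loss = critic(fake).mean() - critic(real).mean()

    # Gradient penalty: evaluate the critic at random interpolates between
    # real and fake samples and push the gradient norm towards 1.
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True
    )
    gp = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

    return w_loss + gp_weight * gp
```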
-
The Elements of GANs, Part 1: The Anatomy of a GAN
Generative Adversarial Networks (GANs) have developed extremely quickly in a few short years. It’s easy to find numerous examples of them generating highly realistic artificial samples of things such as human faces or works of art. While the base version of a GAN just converts random noise into data samples, there are perhaps more applications in using GANs conditionally, where they can convert data between different domains, for example turning drawings into realistic landscapes. GAN variants that can invert the generation process and recover latent variables from samples are also achieving impressive results in unsupervised learning.
-
Why Atmospheric Scientists Should Care About GANs
A paper recently came out in which my coauthors and I demonstrate that generative adversarial networks (GANs) can be used to estimate cloud vertical structure from incomplete data. In the paper, we briefly argued that GANs should be applicable to many problems found in atmospheric science and remote sensing. As a first entry to this blog, I’d like to elaborate a bit more on these points and argue that GANs are a particularly interesting, yet so far barely explored, tool for remote sensing applications in atmospheric science.