This dissertation tackles the problem of entanglement in Generative Adversarial Networks (GANs). The key insight is that disentanglement in GANs can be improved by differentiating between the content and the operations performed on that content. For example, the identity of a generated face can be thought of as the content, while the lighting conditions can be thought of as the operations. We examine disentanglement in several kinds of deep networks: image-to-image translation GANs, unconditional GANs, and sketch extraction networks.
In 2015, Yazeed Alharbi obtained his bachelor's degree from Purdue University in the computer graphics and visualization track, with a minor in philosophy. In 2018, he received his master's degree from King Abdullah University of Science and Technology (KAUST), where he focused on computer vision and the process of publishing in that field. Alharbi's research centers on using generative adversarial networks (GANs) to convert an image from one domain to another. Examples include converting real images to paintings, or converting CGI to realistic images. More specifically, he examines methods of generating many different outputs given one input (multimodality).