Lectures
You can download the lecture slides here. We will try to upload each lecture before its class.
-
01/27/2025 Data-Driven Graphics + image blending
tl;dr: Using large-scale data to synthesize an image [pdf] [pptx]
Reading list:
- Poisson Image Editing, Pérez et al. in SIGGRAPH, 2003
- Scene Completion using Millions of Photographs, Hays et al. in TOG, 2007
- CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs, Johnson et al. in TVCG, 2010
- Modeling the shape of the scene: A holistic representation of the spatial envelope, Oliva et al. in IJCV, 2001
- Semantic photo synthesis, Johnson et al. in Computer Graphics Forum, 2006
- Sketch2Photo: Internet Image Montage, Chen et al. in SIGGRAPH Asia, 2009
- Photo Clip Art, Lalonde et al. in SIGGRAPH, 2007
- ShadowDraw: Real-Time User Guidance for Freehand Drawing, Lee et al. in SIGGRAPH, 2011
- AverageExplorer: Interactive Exploration and Alignment of Visual Data Collections, Zhu et al. in SIGGRAPH, 2014
- Image Deformation Using Moving Least Squares, Schaefer et al. in SIGGRAPH, 2006
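Several readings above (Pérez et al. in particular) reduce blending to solving a Poisson equation over the pasted region: keep the source's gradients, clamp the boundary to the target. A minimal numpy sketch of that idea — single channel, Gauss-Seidel iteration instead of a sparse solver, and `poisson_blend` is an illustrative name, not code from any of these papers:

```python
import numpy as np

def poisson_blend(source, target, mask, iters=300):
    """Gradient-domain blending a la Perez et al. 2003, solved with
    Gauss-Seidel iteration on the discrete Poisson equation.

    source, target: 2D float arrays of the same shape.
    mask: boolean array; True marks the region pasted from `source`
          (must not touch the image border in this toy version).
    Interior pixels match the source's Laplacian; the boundary is
    clamped to `target`, which is what hides the seam.
    """
    out = target.astype(float).copy()
    src = source.astype(float)
    # Discrete Laplacian of the source (the guidance field).
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
           np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4 * src)
    ys, xs = np.where(mask)
    for _ in range(iters):
        for y, x in zip(ys, xs):
            # Each pixel becomes the average of its four neighbours,
            # offset by the source's Laplacian at that pixel.
            out[y, x] = (out[y - 1, x] + out[y + 1, x] +
                         out[y, x - 1] + out[y, x + 1] - lap[y, x]) / 4.0
    return out
```

Because only the source's gradients survive, the pasted content inherits the target's intensities at the seam — the "seamless" in seamless cloning.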
-
01/29/2025 Convolutional Network for Image Synthesis
tl;dr: Neural networks can synthesize high-quality images by leveraging prior knowledge from millions of images. [pdf] [pptx]
Reading list:
- Deep Learning Book, Chapter 6 and 9.
- Szeliski Book, Chapter 5.3 and 5.4.
- Gradient-based learning applied to document recognition, LeCun et al., Proceedings of the IEEE, 1998.
- Learning to Generate Chairs, Tables and Cars with Convolutional Networks, Dosovitskiy et al., PAMI 2017 (CVPR 2015)
- Deconvolution and Checkerboard Artifacts
- Colorful Image Colorization, Zhang et al., ECCV 2016.
- Notes on Backpropagation and CNNs: Olah and 231n
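The "Deconvolution and Checkerboard Artifacts" reading argues for replacing transposed convolution with nearest-neighbor upsampling followed by an ordinary convolution, so every output pixel gets the same kernel overlap. A toy single-channel numpy sketch (function names are mine; real models do this per channel with learned kernels):

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbour upsampling: repeat each pixel factor x per axis."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation with zero padding."""
    kh, kw = k.shape
    pad = ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2))
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def resize_conv(x, k, factor=2):
    """Upsample-then-convolve: avoids the uneven kernel overlap that
    gives transposed convolutions their checkerboard pattern."""
    return conv2d_same(upsample_nearest(x, factor), k)
```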
-
02/03/2025 Perceptual Loss and Generative Adversarial Networks (part 1) [pdf] [pptx]
Reading List:
- Szeliski Book, Chapter 5.5.3
- Murphy Book, Chapter 26
- Generative Adversarial Networks, Goodfellow et al., NeurIPS 2014.
- GAN Tutorial (NeurIPS 2016), by Ian Goodfellow.
- ICCV 2017 Tutorial on GANs.
- CVPR 2018 Tutorial on GANs.
- Image style transfer using convolutional neural networks, Gatys et al., CVPR 2016.
- Perceptual Losses for Real-Time Style Transfer and Super-Resolution, Johnson et al., ECCV 2016.
- Generating images with perceptual similarity metrics based on deep networks. Dosovitskiy and Brox. NeurIPS, 2016
- The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. Zhang et al., CVPR 2018
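The two-player objective from Goodfellow et al. 2014 is compact enough to state directly. A numpy sketch of the discriminator loss and the non-saturating generator loss (logits in, scalar losses out; names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(real_logits, fake_logits):
    """Discriminator: binary cross-entropy, real -> 1, fake -> 0."""
    eps = 1e-12
    return (-np.mean(np.log(sigmoid(real_logits) + eps))
            - np.mean(np.log(1.0 - sigmoid(fake_logits) + eps)))

def g_loss_nonsaturating(fake_logits):
    """Generator: maximize log D(G(z)) instead of minimizing
    log(1 - D(G(z))) -- same fixed point, but much stronger gradients
    early in training when the discriminator wins easily."""
    eps = 1e-12
    return -np.mean(np.log(sigmoid(fake_logits) + eps))
```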
-
02/05/2025 Generative Adversarial Networks (part 2) [pdf] [pptx]
Reading List:
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, Radford et al., ICLR 2016
- Improved Training of Wasserstein GANs, Gulrajani et al., NeurIPS 2017
- Least Squares Generative Adversarial Networks, Mao et al., ICCV 2017
- Progressive Growing of GANs for Improved Quality, Stability, and Variation, Karras et al, ICLR 2018
- Spectral Normalization for Generative Adversarial Networks, Miyato et al., ICLR 2018
- A Style-Based Generator Architecture for Generative Adversarial Networks, Karras et al., CVPR 2019
- Analyzing and Improving the Image Quality of StyleGAN, Karras et al., 2020
- Alias-Free Generative Adversarial Networks, Karras et al., 2021
- Training Generative Adversarial Networks with Limited Data, Karras et al., 2020
- Differentiable Augmentation for Data-Efficient GAN Training, Zhao et al., 2020
- Image Manipulation with Perceptual Discriminators, Sungatullina et al., ECCV 2018.
- Projected GANs Converge Faster, Sauer et al., 2021
- Ensembling Off-the-shelf Models for GAN Training, Kumari et al., 2021
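Spectral normalization (Miyato et al.) controls the discriminator's Lipschitz constant by dividing each weight matrix by its largest singular value, which is estimated cheaply with power iteration. A numpy sketch (more iterations than the single step used per training update in practice):

```python
import numpy as np

def spectral_normalize(W, n_iters=50, seed=0):
    """Divide W by an estimate of its spectral norm (largest singular
    value), obtained by power iteration on W W^T."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # converged Rayleigh quotient ~ top singular value
    return W / sigma
```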
-
02/10/2025 Generative Models Zoo (part I)
tl;dr: We discuss the theory and practice of Variational Autoencoders and Autoregressive Models. [pdf] [pptx]
Reading List:
- Deep Learning Book, Chapter 20
- Murphy Book, Chapter 21 and 22
- Auto-Encoding Variational Bayes, Kingma and Welling, 2013
- Autoencoding beyond pixels using a learned similarity metric, Larsen et al., ICML 2016
- Conditional Image Generation with PixelCNN Decoders, Oord et al, 2016
- Generating Diverse High-Fidelity Images with VQ-VAE-2, Razavi et al., 2019
- Taming Transformers for High-Resolution Image Synthesis, Esser et al, 2021
- How to Train Your Energy-Based Models, Song and Kingma, 2021
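Two pieces of the VAE (Kingma and Welling) are short enough to write out: the reparameterization trick, which keeps sampling differentiable, and the closed-form KL term of the ELBO for a diagonal Gaussian posterior. A numpy sketch:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps: sampling becomes a deterministic function
    of (mu, sigma) plus external noise, so gradients flow through it."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2 I) || N(0, I) ):
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```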
-
02/12/2025 Generative Models Zoo (part II)
tl;dr: We continue our discussion of Autoregressive Models and start exploring Diffusion Models. [pdf] [pptx]
Reading List:
- Murphy Book, Chapter 22 and 25
- Masked Autoencoders Are Scalable Vision Learners, He et al., 2021
- MaskGIT: Masked Generative Image Transformer, Chang et al., 2022
- Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction, Tian et al., 2024
- Denoising Diffusion Probabilistic Models, Ho et al., 2020
- Score-Based Generative Modeling through Stochastic Differential Equations (SDE), Song et al., ICLR 2021
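A useful identity when reading Ho et al.: the forward diffusion can be sampled at any timestep in closed form, without simulating the chain step by step. A numpy sketch, with the paper's linear beta schedule:

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s).
    Returns the noisy sample and the noise the network must predict."""
    alpha_bar = np.cumprod(1.0 - betas)
    eps = rng.standard_normal(np.shape(x0))
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

# The linear schedule from Ho et al. 2020: beta from 1e-4 to 0.02 over 1000 steps.
betas = np.linspace(1e-4, 0.02, 1000)
```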
-
02/19/2025 Generative Models (student presentation)
Reading List:
- Generating Diverse High-Fidelity Images with VQ-VAE-2 (VQ-VAE-2)
- Taming Transformers for High-Resolution Image Synthesis (VQGAN)
- Generative Pretraining from Pixels (ImageGPT)
- MaskGIT: Masked Generative Image Transformer (MaskGIT)
- Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction (VAR)
- Analyzing and Improving the Image Quality of StyleGAN (StyleGAN2)
- Alias-Free Generative Adversarial Networks (StyleGAN3)
- Which Training Methods for GANs do actually Converge?
- StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets (StyleGAN-XL)
- Denoising Diffusion Implicit Models (DDIM)
- Score-Based Generative Modeling through Stochastic Differential Equations (SDE)
- Elucidating the Design Space of Diffusion-Based Generative Models
- Analyzing and Improving the Training Dynamics of Diffusion Models
- Scalable Diffusion Models with Transformers (DiT)
- Consistency Models
- Improved Techniques for Training Consistency Models
- On Distillation of Guided Diffusion Models
-
02/24/2025 Image-to-Image Translation and Conditional Generative Models (part I) [pdf] [pptx]
Reading List:
- Image-to-Image Translation with Conditional Adversarial Networks, Isola et al, CVPR 2017
- Toward Multimodal Image-to-Image Translation, Zhu et al, NeurIPS 2017
- Multi-Agent Diverse Generative Adversarial Networks, Ghosh et al, CVPR 2018
- Diversity-Sensitive Conditional Generative Adversarial Networks, Yang et al, ICLR 2019
- High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs, Wang et al, CVPR 2018.
- Semantic Image Synthesis with Spatially-Adaptive Normalization, Park et al, CVPR 2019
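The pix2pix objective of Isola et al. pairs an adversarial term with an L1 reconstruction term that anchors the output to the paired ground truth. A numpy sketch of the generator's loss (non-saturating GAN form; lambda = 100 follows the paper; the function name is mine):

```python
import numpy as np

def pix2pix_g_loss(fake_logits, fake_img, target_img, lam=100.0):
    """Generator objective: fool the discriminator + lambda * L1.
    The L1 term handles low frequencies; the GAN term sharpens detail."""
    sig = 1.0 / (1.0 + np.exp(-fake_logits))
    gan = -np.mean(np.log(sig + 1e-12))        # non-saturating GAN loss
    l1 = np.mean(np.abs(fake_img - target_img))  # paired reconstruction
    return gan + lam * l1
```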
-
02/26/2025 Image-to-Image Translation and Conditional Generative Models (part II) [pdf] [pptx]
Reading List:
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, Zhu et al, ICCV 2017
- Unsupervised Image-to-Image Translation Networks, Liu et al., NeurIPS 2017
- Learning from Simulated and Unsupervised Images through Adversarial Training, Shrivastava et al, CVPR 2017
- Multimodal Unsupervised Image-to-Image Translation, Huang et al., ECCV 2018
- SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng et al., ICLR 2022
- Adding Conditional Control to Text-to-Image Diffusion Models, Zhang et al., ICCV 2023
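The central trick of CycleGAN (Zhu et al., ICCV 2017) fits in a few lines: with no paired data, two generators are tied together by an L1 cycle constraint. A sketch with caller-supplied G (X to Y) and F (Y to X):

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """CycleGAN's unpaired constraint: F(G(x)) ~ x and G(F(y)) ~ y,
    measured with L1. Added to the usual adversarial losses, it forces
    the translation to be (approximately) invertible."""
    return (np.mean(np.abs(F(G(x)) - x)) +
            np.mean(np.abs(G(F(y)) - y)))
```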
-
03/10/2025 Style and Content, Texture Synthesis
tl;dr: How to control the style and content of your image with Deep Learning [pdf] [pptx]
Reading List:
- Separating Style and Content, Tenenbaum & Freeman, NeurIPS 1996
- Texture Synthesis by Non-parametric Sampling, Efros and Leung, ICCV 1999
- Image Quilting for Texture Synthesis and Transfer, Efros and Freeman, SIGGRAPH 2001.
- Image Analogies, Hertzmann et al, SIGGRAPH 2001
- Deep Photo Style Transfer, Luan et al., CVPR 2017
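In Gatys et al.'s formulation, "style" is the set of Gram matrices of CNN feature maps: channel-to-channel correlations with the spatial layout averaged away. A numpy sketch, treating a feature map as a (channels x positions) matrix:

```python
import numpy as np

def gram_matrix(features):
    """features: (C, H*W) array. The Gram matrix captures which channels
    fire together, discarding where -- Gatys et al.'s notion of style."""
    C, N = features.shape
    return features @ features.T / N

def style_loss(f_gen, f_style):
    """Squared difference of Gram matrices (one layer; the full loss
    sums this over several layers)."""
    return np.mean((gram_matrix(f_gen) - gram_matrix(f_style)) ** 2)
```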
-
03/12/2025 Text-to-Image Synthesis
tl;dr: How to synthesize a photorealistic image given a text description [pdf] [pptx]
Reading List:
- A text-to-picture synthesis system for augmenting communication, Zhu et al., AAAI 2007
- Generating Images from Captions with Attention, Mansimov et al., ICLR 2016
- Generative Adversarial Text to Image Synthesis. Reed et al., ICML 2016
- StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks. Zhang et al., ICCV 2017
- High-Resolution Image Synthesis with Latent Diffusion Models, Rombach et al., CVPR 2022
- Hierarchical Text-Conditional Image Generation with CLIP Latents, Ramesh et al., arXiv 2022
- Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. Saharia, Chan, et al., NeurIPS 2022
- Scaling Autoregressive Models for Content-Rich Text-to-Image Generation, Yu et al., TMLR 2022
- Scaling up GANs for Text-to-Image Synthesis, Kang et al., CVPR 2023
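Most of the diffusion systems on this list (e.g. Imagen, Latent Diffusion) sample with classifier-free guidance, which is worth knowing when reading them: the model's unconditional prediction is extrapolated toward its text-conditional one. The update is a single line (argument names are mine):

```python
import numpy as np

def cfg(eps_uncond, eps_cond, scale):
    """Classifier-free guidance. scale = 1 recovers plain conditional
    sampling; scale = 0 ignores the prompt; larger values trade sample
    diversity for prompt fidelity."""
    return eps_uncond + scale * (eps_cond - eps_uncond)
```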
-
03/17/2025 Conditional Image Synthesis (Student Presentation)
Reading List:
- The Perception-Distortion Tradeoff
- Image Analogies
- Controlling Perceptual Factors in Neural Style Transfer
- Visual attribute transfer through deep image analogy
- Deep Photo Style Transfer
- Zero-Shot Text-to-Image Generation (DALL·E)
- Hierarchical Text-Conditional Image Generation with CLIP Latents (DALL·E 2)
- Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (Imagen)
- Scaling Autoregressive Models for Content-Rich Text-to-Image Generation (PARTI)
- Muse: Text-To-Image Generation via Masked Generative Transformers (MUSE)
- Scene-Based Text-to-Image Generation with Human Priors (Make-a-Scene)
- MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
- Adding Conditional Control to Text-to-Image Diffusion Models (ControlNet)
- One-step Diffusion with Distribution Matching Distillation
- Adversarial Diffusion Distillation
- SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
- Scaling up GANs for Text-to-Image Synthesis
- StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
-
03/20/2025 Image Editing with Optimization (part I)
tl;dr: Manipulate photos by running an optimization algorithm in a learned latent space [pdf] [pptx]
Reading List:
- Generative Visual Manipulation on the Natural Image Manifold, Zhu et al., ECCV 2016
- Semantic Photo Manipulation with a Generative Image Prior, Bau et al., SIGGRAPH 2019
- Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?, Abdal et al., ICCV 2019
- GANSpace: Discovering Interpretable GAN Controls, Härkönen et al., NeurIPS 2020
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation, Pan et al., ECCV 2020
- GAN Inversion: A Survey. Xia et al. 2021
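The optimization at the heart of several of these papers (Image2StyleGAN in particular) is gradient descent on ||G(z) − x||²: project a real image into the generator's latent space, then edit there. A toy numpy sketch with a linear "generator" so the gradient can be written by hand (real systems use autodiff, a deep G, and perceptual losses):

```python
import numpy as np

def invert(G_forward, G_grad, x, z0, lr=0.5, steps=300):
    """Find z minimizing ||G(z) - x||^2 by gradient descent.
    G_grad(z, r) must return the gradient contribution J^T r
    (the factor 2 from the squared norm is folded into lr)."""
    z = z0.copy()
    for _ in range(steps):
        r = G_forward(z) - x      # residual in image space
        z -= lr * G_grad(z, r)    # pull z toward a reconstruction of x
    return z

# Toy 'generator': a fixed linear map G(z) = W z, so J = W.
W = np.array([[1.0, 0.2],
              [0.0, 1.0],
              [0.5, -0.3]])
x = W @ np.array([2.0, -1.0])            # an 'image' with a known latent
z_hat = invert(lambda z: W @ z,
               lambda z, r: W.T @ r,
               x, np.zeros(2))
```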
-
03/24/2025 Image Editing with Optimization (part II) [pdf] [pptx]
Reading List:
- An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion, Gal et al., ICLR 2023
- DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, Ruiz et al., CVPR 2023
- Multi-Concept Customization of Text-to-Image Diffusion, Kumari et al., CVPR 2023
- LoRA: Low-Rank Adaptation of Large Language Models, Hu et al., arXiv 2021
- SVDiff: Compact Parameter Space for Diffusion Fine-Tuning, Han et al., ICCV 2023
- IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models, Ye et al., arXiv 2023
- Encoder-based domain tuning for fast personalization of text-to-image models, Gal et al., SIGGRAPH 2023
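LoRA, which several of the customization papers above build on, replaces full fine-tuning with a trainable low-rank residual on each frozen weight matrix. A numpy sketch of the merged weight:

```python
import numpy as np

def lora_update(W, A, B, alpha=1.0):
    """LoRA (Hu et al. 2021): W' = W + (alpha / r) * B @ A, with
    A of shape (r, d_in) and B of shape (d_out, r). W stays frozen;
    only r * (d_in + d_out) parameters train, and the residual has
    rank at most r."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)
```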
-
03/26/2025 Image Editing (student presentation)
Reading List:
- Colorization Using Optimization
- A Closed Form Solution to Natural Image Matting
- PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing
- Deep Image Prior
- StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
- Encoding in Style: A StyleGAN Encoder for Image-to-Image Translation
- Rewriting a Deep Generative Model
- DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
- An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
- SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations
- Prompt-to-Prompt Image Editing with Cross Attention Control
- Imagic: Text-Based Real Image Editing with Diffusion Models
- InstructPix2Pix: Learning to Follow Image Editing Instructions, Brooks et al., 2022.
- MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing
- Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
- DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing