InteractDiffusion: Interaction Control in
Text-to-Image Diffusion Models

1Nanyang Technological University,  2Universiti Malaya

Stable Diffusion conditions on the text caption only, while GLIGEN conditions on additional layout input. Our proposed InteractDiffusion conditions on an additional interaction label and its location, shown by the shaded area. It effectively controls the interaction in the generated samples based on the given interaction control information, in contrast to the "object placing" effect seen in the baselines.

Abstract

Large-scale text-to-image (T2I) diffusion models have showcased incredible capabilities in generating coherent images based on textual descriptions, enabling vast applications in content generation. While recent advancements have introduced control over factors such as object localization, posture, and image contours, a crucial gap remains in our ability to control the interactions between objects in the generated content. Controlling interactions well in generated images could yield meaningful applications, such as creating realistic scenes with interacting characters. In this work, we study the problem of conditioning T2I diffusion models on Human-Object Interaction (HOI) information, consisting of a triplet label (person, action, object) and the corresponding bounding boxes. We propose a pluggable interaction control model, called InteractDiffusion, that extends existing pre-trained T2I diffusion models so that they can be better conditioned on interactions. Specifically, we tokenize the HOI information and learn their relationships via interaction embeddings. A conditioning self-attention layer is trained to map HOI tokens to visual tokens, thereby better conditioning the visual tokens in existing T2I diffusion models. Our model adds interaction and location control to existing T2I diffusion models, outperforming existing baselines by a large margin in HOI detection score as well as in fidelity, measured by FID and KID.
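For concreteness, below is a minimal Python sketch of what one HOI conditioning record could look like: a (subject, action, object) triplet label plus normalized bounding boxes for the subject and the object. The names and values here are illustrative placeholders, not the released data format.

# A minimal sketch (not the released code) of one HOI conditioning record:
# a (subject, action, object) triplet plus normalized bounding boxes.
from dataclasses import dataclass
from typing import Tuple

BoundingBox = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), normalized to [0, 1]

@dataclass
class HOICondition:
    subject: str              # e.g. "person"
    action: str               # e.g. "feeding"
    obj: str                  # e.g. "cat"
    subject_box: BoundingBox  # location of the acting person
    object_box: BoundingBox   # location of the object being interacted with

# Example: "a person feeding a cat", with the person on the left and the cat on the right.
condition = HOICondition(
    subject="person",
    action="feeding",
    obj="cat",
    subject_box=(0.05, 0.20, 0.45, 0.95),
    object_box=(0.50, 0.55, 0.90, 0.95),
)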

Method

Our proposed pluggable Interaction Module \(I\) seamlessly incorporates interaction information into an existing T2I diffusion model (left). The proposed module \(I\) (right) consists of an Interaction Tokenizer that transforms interaction information into meaningful tokens, an Interaction Embedding that captures intricate interaction relationships, and an Interaction Self-Attention layer that integrates the interaction control information into the Visual Tokens of the existing T2I diffusion model.
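As a rough illustration of this design, the following PyTorch sketch outlines how such a module could be structured. All class names, layer sizes, and shapes are assumptions made for exposition only and do not reflect the released implementation.

# Illustrative sketch of a pluggable interaction module (not the authors' released code).
import torch
import torch.nn as nn

class InteractionTokenizer(nn.Module):
    """Turns (subject, action, object) text embeddings and box embeddings into interaction tokens."""
    def __init__(self, text_dim: int, box_dim: int, token_dim: int):
        super().__init__()
        self.proj = nn.Linear(text_dim + box_dim, token_dim)

    def forward(self, text_emb: torch.Tensor, box_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (B, N, text_dim) embeddings of the triplet phrases
        # box_emb:  (B, N, box_dim) embeddings of the corresponding bounding boxes
        return self.proj(torch.cat([text_emb, box_emb], dim=-1))

class InteractionEmbedding(nn.Module):
    """Adds a role embedding (subject / action / object) and an instance embedding
    per interaction so that tokens belonging to the same triplet stay associated."""
    def __init__(self, token_dim: int, max_instances: int = 30):
        super().__init__()
        self.role = nn.Embedding(3, token_dim)        # 0: subject, 1: action, 2: object
        self.instance = nn.Embedding(max_instances, token_dim)

    def forward(self, tokens, role_ids, instance_ids):
        return tokens + self.role(role_ids) + self.instance(instance_ids)

class InteractionSelfAttention(nn.Module):
    """Self-attention over concatenated visual and interaction tokens; only the
    visual positions are updated, scaled by a learnable gate."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))  # gate starts closed, leaving the pre-trained model unchanged

    def forward(self, visual: torch.Tensor, interaction: torch.Tensor) -> torch.Tensor:
        h = self.norm(torch.cat([visual, interaction], dim=1))
        attn_out, _ = self.attn(h, h, h)
        n_visual = visual.shape[1]
        return visual + torch.tanh(self.gate) * attn_out[:, :n_visual]

# Toy usage: two interactions per image, each contributing subject/action/object tokens.
B, N, text_dim, box_dim, dim = 1, 6, 768, 64, 320
tokens = InteractionTokenizer(text_dim, box_dim, dim)(torch.randn(B, N, text_dim), torch.randn(B, N, box_dim))
role_ids = torch.tensor([[0, 1, 2, 0, 1, 2]])
instance_ids = torch.tensor([[0, 0, 0, 1, 1, 1]])
tokens = InteractionEmbedding(dim)(tokens, role_ids, instance_ids)
visual = torch.randn(B, 64, dim)                     # stand-in for the U-Net's visual tokens
out = InteractionSelfAttention(dim)(visual, tokens)  # (1, 64, 320)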

Results

Qualitative Results

1. Controlling Stable Diffusion

Qualitative results

2. Controlling DreamBooth Personalized SD models

Qualitative results

3. Controlling with Different Actions

Different actions

4. Controlling with Different Objects

Different objects

Comparison to Recent Works

Quantitative comparison

Quantitative comparison between InteractDiffusion and existing baselines in terms of generated image quality (FID and KID) and HOI detection score (mAP). GLIGEN* is the GLIGEN model fine-tuned on HICO-DET. The last row shows the detection score on real images. ↓ indicates that lower is better; ↑ indicates that higher is better.
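As a reference for the image-quality metrics in the table, FID and KID can be estimated from sets of real and generated images with an off-the-shelf library such as torchmetrics. The sketch below is only an illustration with placeholder data and settings, not the exact evaluation pipeline used in the paper.

# Illustrative FID/KID computation with torchmetrics (placeholder data and settings).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
kid = KernelInceptionDistance(subset_size=50)

# Both metrics expect uint8 images of shape (N, 3, H, W) in [0, 255] by default.
real_images = torch.randint(0, 256, (100, 3, 299, 299), dtype=torch.uint8)       # stand-in for dataset images
generated_images = torch.randint(0, 256, (100, 3, 299, 299), dtype=torch.uint8)  # stand-in for model samples

for metric in (fid, kid):
    metric.update(real_images, real=True)
    metric.update(generated_images, real=False)

print("FID:", fid.compute().item())
kid_mean, kid_std = kid.compute()
print("KID:", kid_mean.item(), "+/-", kid_std.item())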

BibTeX

If you use our work in your research, please cite:

@inproceedings{hoe2023interactdiffusion,
  title={InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models},
  author={Jiun Tian Hoe and Xudong Jiang and Chee Seng Chan and Yap-Peng Tan and Weipeng Hu},
  year={2024},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}