arXiv:2501.07730

Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens

Published on Jan 13 · Submitted by turkeyju on Jan 15
Authors: Ju He, et al.
Abstract

Image tokenizers form the foundation of modern text-to-image generative models but are notoriously difficult to train. Furthermore, most existing text-to-image models rely on large-scale, high-quality private datasets, making them challenging to replicate. In this work, we introduce Text-Aware Transformer-based 1-Dimensional Tokenizer (TA-TiTok), an efficient and powerful image tokenizer that can utilize either discrete or continuous 1-dimensional tokens. TA-TiTok uniquely integrates textual information during the tokenizer decoding stage (i.e., de-tokenization), accelerating convergence and enhancing performance. TA-TiTok also benefits from a simplified, yet effective, one-stage training process, eliminating the need for the complex two-stage distillation used in previous 1-dimensional tokenizers. This design allows for seamless scalability to large datasets. Building on this, we introduce a family of text-to-image Masked Generative Models (MaskGen), trained exclusively on open data while achieving comparable performance to models trained on private data. We aim to release both the efficient, strong TA-TiTok tokenizers and the open-data, open-weight MaskGen models to promote broader access and democratize the field of text-to-image masked generative models.
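The key mechanism the abstract describes is injecting textual information during de-tokenization. One common way to realize this is a cross-attention update in which the 1D image tokens act as queries over caption embeddings. The sketch below is illustrative only: the shapes, weight names, and residual form are assumptions, not the paper's exact TA-TiTok architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def text_aware_detokenize(image_tokens, text_embeds, w_q, w_k, w_v):
    """One cross-attention step: 1D image tokens (queries) attend to text
    embeddings (keys/values), mixing caption information into the tokens
    before decoding. All weights and names here are hypothetical."""
    q = image_tokens @ w_q                           # (n_img, d)
    k = text_embeds @ w_k                            # (n_txt, d)
    v = text_embeds @ w_v                            # (n_txt, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (n_img, n_txt)
    return image_tokens + attn @ v                   # residual update

rng = np.random.default_rng(0)
d = 16
tokens = rng.standard_normal((32, d))   # e.g. 32 one-dimensional latent tokens
text = rng.standard_normal((8, d))      # e.g. 8 caption embeddings
out = text_aware_detokenize(tokens, text,
                            *(rng.standard_normal((d, d)) for _ in range(3)))
print(out.shape)  # (32, 16): one enriched vector per image token
```

Because the tokens are a 1D sequence rather than a 2D grid, this conditioning step is a plain sequence-to-sequence attention, which is part of what makes the design simple to scale.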

Community


We introduce TA-TiTok, a novel text-aware, transformer-based 1D tokenizer capable of processing both discrete and continuous tokens while ensuring accurate alignment between reconstructions and textual descriptions. Building upon TA-TiTok, we present MaskGen, a family of text-to-image masked generative models trained exclusively on open data. MaskGen achieves performance on par with models trained on proprietary datasets, while significantly reducing training costs and delivering substantially faster inference speeds.
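MaskGen belongs to the masked generative model family, which samples an image by starting from fully masked tokens and iteratively unmasking the most confident predictions under a shrinking mask schedule. The sketch below follows the standard MaskGIT-style cosine schedule with a dummy stand-in for the transformer; it is an assumed illustration of the sampler family, not MaskGen's published procedure.

```python
import numpy as np

def masked_generation(num_tokens=32, vocab=256, steps=8, seed=0):
    """MaskGIT-style iterative decoding (illustrative sketch).
    A dummy random 'model' stands in for the text-conditioned transformer."""
    rng = np.random.default_rng(seed)
    MASK = -1
    seq = np.full(num_tokens, MASK)
    for step in range(steps):
        # 1. Predict a distribution over the vocab at every position.
        logits = rng.standard_normal((num_tokens, vocab))  # dummy model
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        pred = probs.argmax(-1)
        conf = probs.max(-1)
        conf[seq != MASK] = np.inf  # already-decoded tokens stay fixed
        # 2. Cosine schedule: how many tokens remain masked after this step.
        keep_masked = int(num_tokens * np.cos(np.pi / 2 * (step + 1) / steps))
        # 3. Re-mask the least confident positions; commit the rest.
        order = np.argsort(conf)            # ascending confidence
        new_seq = pred.copy()
        new_seq[seq != MASK] = seq[seq != MASK]
        new_seq[order[:keep_masked]] = MASK
        seq = new_seq
    return seq

seq = masked_generation()
print(seq[:8])  # a fully decoded token sequence after `steps` iterations
```

Decoding all positions in parallel over a handful of steps, rather than one token at a time, is what gives masked generative models their inference-speed advantage over autoregressive samplers.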

Project page: https://tacju.github.io/projects/maskgen.html


Models citing this paper: 1
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 0