Pre-training image-language transformers for open-vocabulary tasks
Abstract
We present a pre-training approach for vision and language transformer models based on a mixture of diverse tasks. We explore both the use of image-text captioning data in pre-training, which requires no additional supervision, and object-aware strategies for pre-training the model. We evaluate the method on a number of text-generative vision+language tasks, such as Visual Question Answering, visual entailment, and captioning, and demonstrate large gains over standard pre-training methods.
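The mixture-of-tasks idea can be viewed as sampling one pre-training objective per step according to fixed mixture weights. The sketch below is purely illustrative: the task names, weights, and function are assumptions for exposition, not details taken from the paper.

```python
import random

# Illustrative mixture of pre-training tasks (names and weights are
# hypothetical, not from the paper): a captioning objective that needs
# no extra supervision, and an object-aware objective.
TASKS = {
    "image_text_captioning": 0.5,
    "object_aware_pretraining": 0.5,
}

def sample_task_schedule(num_steps, seed=0):
    """Return one sampled task name per training step."""
    rng = random.Random(seed)
    names = list(TASKS)
    weights = [TASKS[n] for n in names]
    return [rng.choices(names, weights=weights)[0] for _ in range(num_steps)]

schedule = sample_task_schedule(8)
```

In an actual training loop, each step would then compute the loss for the sampled task on a batch drawn from that task's data source.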