Note that this is just the discrete VAE component used to help with training and generating images; it will not let you generate wild images from natural-language prompts as shown in the blog post (<a href="https://openai.com/blog/dall-e/" rel="nofollow">https://openai.com/blog/dall-e/</a>).<p>More specifically, from that link:<p>> [...] the image is represented using 1024 tokens with a vocabulary size of 8192.<p>> The images are preprocessed to 256x256 resolution during training. Similar to VQVAE, each image is compressed to a 32x32 grid of discrete latent codes using a discrete VAE that we pretrained using a continuous relaxation.<p>OpenAI also provides the encoder and decoder models and their weights.<p>However, with the decoder model available, it's now possible to, say, train a text-encoding model that feeds into that decoder (trained on, say, an annotated image dataset) and get something close to the DALL-E demo OpenAI posted. Or something even better!
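<p>To make the quoted figures concrete, here is a small back-of-the-envelope sketch (just arithmetic on the numbers stated in the blog post, not the actual model code) showing how much the discrete VAE compresses each image:<p><pre><code>import math

# Figures quoted from the OpenAI blog post:
image_side = 256   # images preprocessed to 256x256
grid_side = 32     # 32x32 grid of discrete latent codes
vocab_size = 8192  # codebook size of the discrete VAE

num_tokens = grid_side * grid_side          # 1024 tokens per image
bits_per_token = math.log2(vocab_size)      # 13.0 bits per token
latent_bits = num_tokens * bits_per_token

# Raw RGB pixels at 8 bits per channel, for comparison:
pixel_bits = image_side * image_side * 3 * 8

print(num_tokens)                       # 1024
print(bits_per_token)                   # 13.0
print(round(pixel_bits / latent_bits))  # ~118x compression
</code></pre><p>So each 256x256 image boils down to 1024 tokens drawn from an 8192-entry codebook, which is roughly a 118x reduction versus raw pixels; that short token sequence is what makes it tractable to model images with a transformer.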