With GPT-3 getting so many people interested in NLP, and with OpenAI's recently announced pricing plan putting it out of many people's reach, I thought it might be useful for some to see how easy it is to deploy your own GPT-2 API.<p>This project uses a couple of tools:<p>- Cortex: An open source model serving platform I help maintain. <a href="https://github.com/cortexlabs/cortex" rel="nofollow">https://github.com/cortexlabs/cortex</a><p>- Hugging Face's Transformers: An open source library for using popular language models, like GPT-2. <a href="https://github.com/huggingface/transformers" rel="nofollow">https://github.com/huggingface/transformers</a><p>This project uses a vanilla pre-trained GPT-2 with PyTorch. If you want to use TensorFlow/ONNX, that's supported as well ( <a href="https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/text-generator" rel="nofollow">https://github.com/cortexlabs/cortex/tree/master/examples/te...</a> ).<p>If you want to fine-tune GPT-2 on your own text (a la AI Dungeon), I'd suggest training with gpt-2-simple and deploying with Cortex: <a href="https://github.com/minimaxir/gpt-2-simple" rel="nofollow">https://github.com/minimaxir/gpt-2-simple</a><p>Lastly, by following this example, you can deploy your API locally (where inference will probably be slow, depending on your hardware, but will cost you $0) or to a cluster on AWS, which Cortex can spin up and manage for you.
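<p>To give a feel for how little code is involved, here's a rough sketch of a Cortex predictor for GPT-2. It assumes Cortex's Python predictor interface (a class with <i>__init__(self, config)</i> and <i>predict(self, payload)</i>) and the Transformers library; the model name "gpt2", the request shape, and the generation parameters are illustrative choices, not the exact code from the linked example:<p><pre><code>
# Hypothetical sketch of a Cortex predictor serving GPT-2 via
# Hugging Face Transformers. Cortex instantiates this class once
# per API replica and calls predict() for each request.

class PythonPredictor:
    def __init__(self, config):
        # Imports are deferred so the class itself has no hard
        # dependency at definition time; the replica loads the
        # model once on startup.
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.model = GPT2LMHeadModel.from_pretrained("gpt2").to(self.device)
        self.model.eval()

    def predict(self, payload):
        # payload is the parsed JSON request body, assumed here
        # to look like {"text": "Machine learning is"}.
        import torch

        input_ids = self.tokenizer.encode(
            payload["text"], return_tensors="pt"
        ).to(self.device)
        with torch.no_grad():
            output = self.model.generate(
                input_ids, max_length=50, do_sample=True
            )
        return self.tokenizer.decode(output[0], skip_special_tokens=True)
</code></pre><p>You'd point a Cortex API config at this predictor and run <i>cortex deploy</i>; the first request to a replica pays the model-loading cost, after which inference is in memory.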