The new OpenAI "gpt-3.5-turbo" model is cheap, but how does it perform? Check out how its summaries of Hacker News stories compare to those from the 10x more expensive "text-davinci-003" model.
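If you want to poke at the two models yourself, note that they sit behind different endpoints of the (pre-1.0) openai Python client. A minimal sketch with a placeholder prompt, not the exact prompts the bot uses:

    import openai

    # Chat endpoint used by gpt-3.5-turbo (message-based)
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize this story: ..."}],
    )
    print(chat.choices[0].message.content)

    # Completions endpoint used by text-davinci-003 (plain prompt)
    comp = openai.Completion.create(
        model="text-davinci-003",
        prompt="Summarize this story: ...",
        max_tokens=256,
    )
    print(comp.choices[0].text)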
Are these the only prompts used with the models? https://github.com/jiggy-ai/hn_summary/blob/master/src/summarize.py

Also, how did you decide on those specific prompts?

Lastly, is it accurate that the text being summarized is truncated, rather than, say, fed through in chunks to build a summary of summaries for longer articles? If so, why process the articles that require truncation at all?
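To make that last question concrete, something like this is what I had in mind (a rough sketch using tiktoken for chunking; not claiming this is how the repo does it, and the prompt wording is made up):

    import openai
    import tiktoken

    ENC = tiktoken.encoding_for_model("gpt-3.5-turbo")
    CHUNK_TOKENS = 2500  # leave room for the prompt and the reply

    def summarize(text):
        # One summarization call; the prompt wording here is hypothetical.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": "Summarize the following in one short paragraph:\n\n" + text}],
        )
        return resp.choices[0].message.content.strip()

    def summarize_long(text):
        # Split into token-sized chunks, summarize each, then summarize the summaries.
        tokens = ENC.encode(text)
        chunks = [ENC.decode(tokens[i:i + CHUNK_TOKENS])
                  for i in range(0, len(tokens), CHUNK_TOKENS)]
        if len(chunks) == 1:
            return summarize(chunks[0])
        return summarize("\n\n".join(summarize(c) for c in chunks))

It costs more calls per long article, but nothing gets silently dropped.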