TLDR: "The primary reason to introduce a new example is when the LLM incorrectly identifies a technology as ABSTRACT (or not), misses connections, or misses resources entirely. However, we don’t have unlimited tokens for every example we might need. The optimization we came up with here is Dynamic Examples using pre-filtering: instead of providing examples meant to generalize to everything, we focus the examples on what we can guess is in the queries.

We extract a list of technologies using word lists, which is easier than extracting their intents, and if we don’t find many matches, we assume that more ABSTRACT resources are present. Once extracted, we can build a custom prompt by selecting examples specific to the technologies mentioned in the query, plus a set of bedrock examples that include baseline rules for the different actions and expand language understanding."
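To make the pre-filtering idea concrete, here is a minimal sketch of how example selection could work. The word lists, example strings, and function names (`TECH_WORDLIST`, `extract_technologies`, `build_prompt`, etc.) are hypothetical stand-ins, not the author's actual implementation.

```python
# Hypothetical word list mapping technology keywords to canonical names.
TECH_WORDLIST = {
    "postgres": "PostgreSQL",
    "postgresql": "PostgreSQL",
    "k8s": "Kubernetes",
    "kubernetes": "Kubernetes",
    "react": "React",
}

# Hypothetical per-technology few-shot examples, plus bedrock examples that are
# always included and extra examples used when the query looks mostly ABSTRACT.
TECH_EXAMPLES = {
    "PostgreSQL": "Query: migrate the postgres cluster -> Resource: PostgreSQL (CONCRETE)",
    "Kubernetes": "Query: scale the k8s deployment -> Resource: Kubernetes (CONCRETE)",
    "React": "Query: rewrite the frontend in react -> Resource: React (CONCRETE)",
}
BEDROCK_EXAMPLES = [
    "Query: improve our data layer -> Resource: data layer (ABSTRACT)",
    "Query: adopt a message queue -> Resource: message queue (ABSTRACT)",
]
ABSTRACT_EXAMPLES = [
    "Query: modernize the stack -> Resource: stack (ABSTRACT)",
]


def extract_technologies(query: str) -> set[str]:
    """Pre-filter: look up query words in the word list instead of classifying intent."""
    words = query.lower().split()
    return {TECH_WORDLIST[w] for w in words if w in TECH_WORDLIST}


def build_prompt(query: str, min_matches: int = 1) -> str:
    """Assemble bedrock examples plus examples specific to the technologies found."""
    techs = extract_technologies(query)
    examples = list(BEDROCK_EXAMPLES)
    examples += [TECH_EXAMPLES[t] for t in sorted(techs)]
    # Few word-list hits suggests the query leans on ABSTRACT resources,
    # so weight the prompt toward ABSTRACT examples instead.
    if len(techs) < min_matches:
        examples += ABSTRACT_EXAMPLES
    return "\n".join(examples) + f"\n\nQuery: {query}"


if __name__ == "__main__":
    print(build_prompt("scale the k8s deployment behind the react app"))
```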