Convert your vector embeddings into a set of questions and their ideal responses. Use this dataset to test your LLM and catch failures caused by prompt or RAG updates.

Get started in 3 lines of code:

```
pip3 install fiddlecube
```

```
from fiddlecube import FiddleCube

fc = FiddleCube(api_key="<api-key>")
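# Build a QnA dataset from a list of context strings;
# the second argument is presumably the number of QnA pairs to generate (10 here).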
dataset = fc.generate(
    [
        "The cat did not want to be petted.",
        "The cat was not happy with the owner's behavior.",
    ],
    10,
)
dataset
```

Generate your API key: https://dashboard.fiddlecube.ai/api-key

# Ideal QnA datasets for testing, eval and training LLMs

Testing, evaluating, or training LLMs requires an ideal QnA dataset, also known as a golden dataset.

This dataset needs to be diverse, covering a wide range of queries with accurate responses.

Creating such a dataset takes significant manual effort.

As the prompt or RAG context is updated, which happens nearly all the time for early-stage applications, the dataset needs to be updated to match.

# FiddleCube generates ideal QnA from vector embeddings

- The questions cover the entire RAG knowledge corpus.
- Complex reasoning, safety alignment, and 5 other question types are generated.
- Filtered for correctness, context relevance, and style.
- Auto-updated with prompt and RAG updates.
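To make the testing workflow concrete, here is a minimal sketch of a regression check built on the generated dataset. It assumes the dataset is an iterable of records with `question` and `answer` fields and uses a placeholder `my_llm` call; both the field names and the function are hypothetical illustrations, not FiddleCube's documented schema or API.

```
# Hypothetical regression check against the golden dataset.
# The record fields ("question", "answer") and my_llm are assumptions for illustration.

def my_llm(question: str) -> str:
    """Placeholder for your own model or RAG pipeline call."""
    raise NotImplementedError

def exact_match_failures(dataset) -> list:
    """Collect QnA pairs where the LLM's answer diverges from the ideal one."""
    failures = []
    for record in dataset:
        predicted = my_llm(record["question"])
        if predicted.strip() != record["answer"].strip():
            failures.append((record["question"], record["answer"], predicted))
    return failures

# Rerun after every prompt or RAG update to catch regressions:
# failures = exact_match_failures(dataset)
```

In practice an exact string match is usually too strict; you would swap in a semantic-similarity or LLM-graded comparison, but the shape of the loop stays the same.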