Ask HN: The correct prompt/response format to fine-tune LLM for function call

1 point by michaelnny 12 months ago
Hi, we're working on deploying llama3 models on a private server for a corporate client, but we found that llama3 does not support function calling out of the box. So now we're trying to fine-tune it for that ourselves.

After searching online we found a few datasets available for this specific purpose. However, after reviewing them we have some questions about the prompt and response format, and we'd like to ask for your help before we start training.

1. Should we include more than one function's metadata in the prompt, or just a single one?

Each sample in the dataset we found includes only one function's metadata, but my intuition is that 2 or 3 might be better, with only one relevant to the query. That way the model might learn to pick the correct function.

2. What is the correct format for the response?

I know the response labels should include the function name and the input arguments, but I'd suggest designing a format that's easy to parse after the inference call, since we want to implement an OpenAI-API-compatible system (we already have the chat API endpoint implemented and working with the OpenAI client).

Any suggestions and ideas would be appreciated, thanks!

This is the example dataset we've been looking at: https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2
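Not a definitive answer, but here's a minimal sketch of the kind of sample format the question describes: several function schemas in the system prompt (only one relevant, so the model learns to choose), and the assistant's call wrapped in sentinel tags so it's trivial to parse afterwards. The function names, the `<functioncall>` wrapper, and the field names are all illustrative assumptions, not a fixed standard from llama3 or the glaive dataset.

```python
import json
import re

# Hypothetical function schemas; "get_weather" and "get_time" are made-up examples.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
    {
        "name": "get_time",
        "description": "Get the current local time for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
]

# One training sample: multiple schemas in the system prompt, only one relevant.
sample = {
    "system": "You have access to these functions:\n" + json.dumps(functions),
    "user": "What's the weather like in Tokyo?",
    # Sentinel tags around a JSON payload make post-inference extraction easy.
    "assistant": '<functioncall>{"name": "get_weather", '
                 '"arguments": {"city": "Tokyo"}}</functioncall>',
}

def parse_function_call(text: str):
    """Return (name, arguments) from model output, or None for plain text."""
    m = re.search(r"<functioncall>(.*?)</functioncall>", text, re.DOTALL)
    if m is None:
        return None
    call = json.loads(m.group(1))
    return call["name"], call["arguments"]

print(parse_function_call(sample["assistant"]))
```

With a wrapper like this, the serving layer can map a parsed call onto an OpenAI-style `tool_calls` entry in the chat-completion response, and fall back to returning plain assistant text when no tag is found.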

1 comment

michaelnny 12 months ago
Hi, just want to say that we would really appreciate it if you could give us some advice or suggestions on this question!