I've created a GitHub Action that provides code reviews as annotations on PRs [1], but it's producing some pretty odd feedback [2]. I'm not sure whether it's my use of OpenAI's functions [3], my system prompt [4], or whether GPT just isn't that good at isolated code reviews.<p>[1] https://github.com/marketplace/actions/chat-gpt-code-peer-review<p>[2] https://github.com/edelauna/discord-bot-ai/pull/74/files<p>[3] https://github.com/edelauna/gpt-review/blob/main/src/openai/utils/make-review.ts#L11<p>[4] https://github.com/edelauna/gpt-review/blob/main/src/openai/utils/message-manager.ts#L13
It's your prompts: they're way too casual. Also, starting your system prompt with "You are a lazy..." will guarantee crappy output. Tell the AI what you want it to do, i.e. annotate PRs on GitHub, and then maybe break that down further. If you don't, it expects the full code to be available, which is why you get the unused-import "errors". You're explaining just enough for it to make errors, but not enough for it to do the task you actually want. Also, scrap the JSON part... it knows.
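To illustrate (a hypothetical sketch, not the author's actual prompt; the message shape assumes OpenAI's Chat Completions format): state the task up front and tell the model it only sees a diff, so it doesn't flag things that require whole-repo context.

```typescript
// Hypothetical rewrite of the system prompt: name the task, the input
// format, and the limits of what the model can see.
const systemPrompt = [
  "You are a code reviewer annotating GitHub pull requests.",
  "You will be given a diff hunk, not the full repository,",
  "so do not flag issues (such as unused imports) that would",
  "require seeing files outside the diff.",
  "Comment only on concrete problems in the changed lines.",
].join(" ");

// Standard Chat Completions message list; the diff hunk placeholder
// is illustrative, not a real API field.
const messages = [
  { role: "system", content: systemPrompt },
  { role: "user", content: "<diff hunk goes here>" },
];

console.log(messages[0].content);
```

The point is scoping: once the model is told it can't see the rest of the codebase, "unused import" style complaints should mostly disappear.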