I'm working on a use case where every PR created in a repository needs to be evaluated against the topic it will be tagged with. Based on that topic, the AI gives feedback on the PR; the author then makes changes in response, and merging should stay blocked until the feedback is resolved. When the code is updated, the system should check the new changes against the previously given feedback and either ask for further improvements or give a green light and unblock the merge.

To implement this, I'm looking for ideas, in particular on how to keep the LLM from chasing perfection: it needs a generalizable way to decide "this is enough."

If you have worked on something like this or have any ideas on how to get started, I would really appreciate the help.

Thanks
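
For reference, this is roughly the shape of the check I'm picturing, as a CI step that gates a required status check on the PR. It's only a sketch: the fixed rubric plus threshold is just one guess at how to encode "this is enough", the model call is stubbed out, and all names are placeholders rather than a working setup.

    # Sketch of a rubric-gated PR check (placeholder names; model call is a stub).
    # Meant to run as a CI step on each push to the PR branch; the exit code
    # drives the required status check that blocks or unblocks merging.
    import json
    import subprocess
    import sys

    RUBRIC = [
        "Change addresses the tagged topic",
        "No obvious correctness issues introduced",
        "Previously raised feedback has been addressed",
    ]
    PASS_THRESHOLD = 2  # "good enough" = at least this many rubric items satisfied


    def get_diff(base_ref: str = "origin/main") -> str:
        """Diff of the PR branch against the base branch."""
        return subprocess.run(
            ["git", "diff", base_ref], capture_output=True, text=True, check=True
        ).stdout


    def ask_llm(prompt: str) -> dict:
        """Placeholder for the actual model call; expected to return JSON like
        {"satisfied": ["<rubric item>", ...], "feedback": "..."}."""
        raise NotImplementedError("wire up an LLM provider here")


    def main() -> int:
        topic = sys.argv[1] if len(sys.argv) > 1 else "general"
        prior_feedback = ""  # e.g. loaded from the previous review comment
        prompt = (
            f"Topic: {topic}\nPrior feedback: {prior_feedback}\n"
            f"Rubric: {json.dumps(RUBRIC)}\nDiff:\n{get_diff()}\n"
            "Return JSON: which rubric items are satisfied, plus feedback."
        )
        result = ask_llm(prompt)
        satisfied = [item for item in RUBRIC if item in result.get("satisfied", [])]
        print(result.get("feedback", ""))
        # Pass once enough rubric items are met, instead of chasing perfection.
        return 0 if len(satisfied) >= PASS_THRESHOLD else 1


    if __name__ == "__main__":
        sys.exit(main())

The part I'm least sure about is whether a fixed rubric with a threshold like this is enough to stop the endless-nitpicking problem, which is why I'm asking for ideas.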