Hi HN! A month ago, I shared LlamaPReview in Show HN[1]. Since then, we've grown to 2000+ repos (60%+ public) with 16k+ combined stars. More importantly, we've made significant improvements in both efficiency and review quality.<p>Key improvements over the past month:<p>1. ReAct-based Review Pipeline
We implemented a ReAct (Reasoning + Acting) pattern that mimics how senior developers review code. Here's a simplified version:<p><pre><code> ```python
def react_based_review(pr_context) -> Review:
    # Step 1: Initial assessment - understand the changes
    initial_analysis = initial_assessment(pr_context)
    # Step 2: Deep technical analysis
    detailed_analysis = deep_analysis(pr_context, initial_analysis)
    # Step 3: Final synthesis
    return synthesize_review(pr_context, initial_analysis, detailed_analysis)
```
</code></pre>
2. Two-stage format alignment pipeline<p><pre><code> ```python
def review_pipeline(pr_context) -> Review:
    # Stage 1: Deep analysis with a large LLM
    review = react_based_review(pr_context)
    # Stage 2: Format standardization with a small LLM
    return format_standardize(review)
```
</code></pre>
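To make stage 2 concrete, here's a minimal sketch of the idea behind format standardization: the small model is asked to fill a fixed section template so every review looks the same. The section names and the `enforce_template` helper are illustrative assumptions, not our exact schema:<p><pre><code> ```python
from typing import Dict

# Hypothetical fixed template; these section names are illustrative
# assumptions, not LlamaPReview's actual output schema.
REVIEW_SECTIONS = ("Summary", "Potential Issues", "Suggestions")

def enforce_template(sections: Dict[str, str]) -> str:
    """Render review content into a fixed section order, filling gaps."""
    parts = []
    for name in REVIEW_SECTIONS:
        body = sections.get(name, "").strip() or "None noted."
        parts.append(f"## {name}\n{body}")
    return "\n\n".join(parts)
```
</code></pre>
Keeping the template in code (rather than in the analysis prompt) means the large model can focus entirely on substance.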
This two-stage approach (large LLM for analysis + small LLM for format standardization) gives us both high-quality insights and a consistent output format.<p>3. Intelligent Skip Analysis
We now automatically identify PRs that don't need deep review (docs, dependencies, formatting), reducing token consumption by 40%. Implementation:<p><pre><code> ```python
from typing import Tuple

def intelligent_skip_analysis(pr_changes) -> Tuple[bool, str]:
    skip_conditions = {
        'docs_only': check_documentation_changes,
        'dependency_updates': check_dependency_files,
        'formatting': check_formatting_only,
        'configuration': check_config_files,
    }
    for condition_name, checker in skip_conditions.items():
        if checker(pr_changes):
            return True, f"Optimizing review: {condition_name}"
    return False, "Proceeding with full review"
```
</code></pre>
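Each checker is a cheap, deterministic test over the changed file paths. As an example, a docs-only checker might look like this (the extension list and function signature here are illustrative assumptions):<p><pre><code> ```python
from pathlib import Path
from typing import Iterable

# Extensions treated as documentation; the exact list is an assumption.
DOC_EXTENSIONS = {".md", ".rst", ".txt", ".adoc"}

def check_documentation_changes(changed_files: Iterable[str]) -> bool:
    """True only when every changed file looks like documentation."""
    files = list(changed_files)
    return bool(files) and all(
        Path(f).suffix.lower() in DOC_EXTENSIONS for f in files
    )
```
</code></pre>
Because these checks run before any LLM call, a skipped PR costs essentially zero tokens.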
Key metrics since launch:<p><pre><code> - 2000+ repos using LlamaPReview
- 60% public, 40% private repositories
- 40% reduction in token consumption
- 30% faster PR processing
- 25% higher user satisfaction
</code></pre>
Privacy & Security:<p><pre><code> Many asked about code privacy in the last thread. Here's how we handle it:
- All PR review processing happens in-memory
- No permanent storage of repository code
- Immediate cleanup after PR review
- No training on user code
</code></pre>
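The "in-memory processing + immediate cleanup" flow can be pictured as a scoped workspace that cannot outlive a single review. This is a hypothetical sketch of the shape, not our literal implementation (the `ephemeral_checkout` name and tempdir approach are assumptions):<p><pre><code> ```python
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def ephemeral_checkout():
    """Hold PR contents only for the lifetime of one review."""
    workdir = Path(tempfile.mkdtemp(prefix="pr-review-"))
    try:
        yield workdir  # the review runs against this directory
    finally:
        # Immediate cleanup: nothing persists after the review returns.
        shutil.rmtree(workdir, ignore_errors=True)
```
</code></pre>
Scoping the data to a context manager makes "no permanent storage" a structural property rather than a policy we have to remember to enforce.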
What's next:<p><pre><code> We're actively working on GraphRAG-based repository understanding to enable deeper code review analysis and pattern detection.
</code></pre>
Links:<p><pre><code> [1] Previous Show HN discussion: [https://news.ycombinator.com/item?id=41996859]
[2] Technical deep-dive: [https://github.com/JetXu-LLM/LlamaPReview-site/discussions/3]
[3] Install (free): [https://github.com/marketplace/llamapreview]
</code></pre>
Happy to discuss our approach to privacy, technical implementation, or future plans!