
Show HN: LlamaPReview – AI code reviewer trusted by 2000 repos, 40%+ effective

2 points by Jet_Xu, 6 months ago
Hi HN! A month ago, I shared LlamaPReview in Show HN [1]. Since then, we've grown to 2000+ repos (60%+ public) with 16k+ combined stars. More importantly, we've made significant improvements in both efficiency and review quality.

Key improvements in the recent month:

1. ReAct-based Review Pipeline

We implemented a ReAct (Reasoning + Acting) pattern that mimics how senior developers review code. Here's a simplified version:

```python
def react_based_review(pr_context) -> Review:
    # Step 1: Initial assessment - understand the changes
    initial_analysis = initial_assessment(pr_context)
    # Step 2: Deep technical analysis
    deep_analysis = deep_technical_analysis(pr_context, initial_analysis)
    # Step 3: Final synthesis
    return synthesize_review(pr_context, initial_analysis, deep_analysis)
```

2. Two-stage Format Alignment Pipeline

```python
def review_pipeline(pr_context) -> Review:
    # Stage 1: Deep analysis with a large LLM
    review = react_based_review(pr_context)
    # Stage 2: Format standardization with a small LLM
    return format_standardize(review)
```

This two-stage approach (a large LLM for analysis plus a small LLM for format standardization) ensures both high-quality insights and a consistent output format.

3. Intelligent Skip Analysis

We now automatically identify PRs that don't need deep review (docs, dependencies, formatting), reducing token consumption by 40%. Implementation:

```python
def intelligent_skip_analysis(pr_changes) -> Tuple[bool, str]:
    skip_conditions = {
        'docs_only': check_documentation_changes,
        'dependency_updates': check_dependency_files,
        'formatting': check_formatting_only,
        'configuration': check_config_files,
    }
    for condition_name, checker in skip_conditions.items():
        if checker(pr_changes):
            return True, f"Optimizing review: {condition_name}"
    return False, "Proceeding with full review"
```

Key metrics since launch:

- 2000+ repos using LlamaPReview
- 60% public, 40% private repositories
- 40% reduction in token consumption
- 30% faster PR processing
- 25% higher user satisfaction

Privacy & Security: many asked about code privacy in the last thread. Here's how we handle it:

- All PR review processing happens in-memory
- No permanent storage of repository code
- Immediate cleanup after each PR review
- No training on user code

What's next: we are actively working on GraphRAG-based repository understanding for better in-depth code review analysis and pattern detection.

Links:

[1] Previous Show HN discussion: https://news.ycombinator.com/item?id=41996859
[2] Technical deep-dive: https://github.com/JetXu-LLM/LlamaPReview-site/discussions/3
[3] Install (free): https://github.com/marketplace/llamapreview

Happy to discuss our approach to privacy, technical implementation, or future plans!
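The skip-analysis pattern above can be sketched as a small runnable example. The checker helpers (`check_documentation_changes`, `check_dependency_files`) are hypothetical implementations I'm assuming here, operating on a plain list of changed file paths; the real LlamaPReview checkers are not public:

```python
from typing import Callable, Dict, List, Tuple

def check_documentation_changes(changed_files: List[str]) -> bool:
    # Hypothetical checker: true if every changed file is documentation.
    return all(
        f.endswith((".md", ".rst")) or f.startswith("docs/")
        for f in changed_files
    )

def check_dependency_files(changed_files: List[str]) -> bool:
    # Hypothetical checker: true if only dependency manifests changed.
    manifests = {"requirements.txt", "package.json", "poetry.lock", "go.mod"}
    return all(f in manifests for f in changed_files)

def intelligent_skip_analysis(changed_files: List[str]) -> Tuple[bool, str]:
    # Same shape as the pipeline in the post: walk an ordered table of
    # skip conditions and short-circuit on the first one that matches.
    skip_conditions: Dict[str, Callable[[List[str]], bool]] = {
        "docs_only": check_documentation_changes,
        "dependency_updates": check_dependency_files,
    }
    for condition_name, checker in skip_conditions.items():
        if checker(changed_files):
            return True, f"Optimizing review: {condition_name}"
    return False, "Proceeding with full review"

print(intelligent_skip_analysis(["docs/setup.md", "README.md"]))
# → (True, 'Optimizing review: docs_only')
print(intelligent_skip_analysis(["src/app.py", "README.md"]))
# → (False, 'Proceeding with full review')
```

Keeping the conditions in a dict of named predicates makes the skip policy easy to extend: adding a new cheap-to-review category is one entry, not another branch in the review loop.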

No comments yet.
