BugViper routes all AI calls through OpenRouter, which gives you access to hundreds of models from OpenAI, Anthropic, Google, Meta, and more — all with a single API key. You can choose different models for the two phases of a review, so you can put a capable model on exploration and a cheaper one on synthesis, or use the same model for both.

How the review agent uses models

Every pull request review runs a three-node LangGraph pipeline:
  1. Explorer node — investigates the changed files, making up to MAX_TOOL_ROUNDS tool calls against the Neo4j code graph to build context (call chains, complexity scores, dependency trees, and more).
  2. Reviewer node — reads the accumulated evidence and produces structured findings: issues with severity, confidence, and exact line numbers.
  3. Summarizer node — generates a narrative walkthrough and aggregates the findings into the top-level PR summary comment.
The REVIEW_MODEL variable controls the Explorer and Reviewer nodes. The SYNTHESIS_MODEL variable controls the Summarizer node.
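The three phases above can be sketched as plain functions. This is a hypothetical outline of the flow, not BugViper's actual internals; `call_model` and `call_tool` are illustrative stand-ins for the real model and graph-tool clients:

```python
import os

# The three documented environment variables, with their documented defaults.
REVIEW_MODEL = os.environ.get("REVIEW_MODEL", "openai/gpt-4o-mini")
SYNTHESIS_MODEL = os.environ.get("SYNTHESIS_MODEL", "openai/gpt-4o-mini")
MAX_TOOL_ROUNDS = int(os.environ.get("MAX_TOOL_ROUNDS", "8"))

def explore(file, call_model, call_tool):
    """Explorer: gather graph evidence, capped at MAX_TOOL_ROUNDS tool calls."""
    evidence = []
    for _ in range(MAX_TOOL_ROUNDS):
        request = call_model(REVIEW_MODEL, file, evidence)
        if request is None:  # model decided it has enough context
            break
        evidence.append(call_tool(request))
    return evidence

def review(file, evidence, call_model):
    """Reviewer: turn accumulated evidence into structured findings."""
    return call_model(REVIEW_MODEL, file, evidence)

def summarize(findings, call_model):
    """Summarizer: narrative walkthrough plus the top-level PR summary."""
    return call_model(SYNTHESIS_MODEL, findings)
```

Note that the Explorer and Reviewer share REVIEW_MODEL, while only the final summarization step uses SYNTHESIS_MODEL.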

Setting your models

Add these lines to your .env file:
REVIEW_MODEL=anthropic/claude-sonnet-4-5
SYNTHESIS_MODEL=openai/gpt-4o-mini
MAX_TOOL_ROUNDS=8
Each model value must be a valid OpenRouter model slug, exactly as it appears on the OpenRouter models page.
| Model | OpenRouter slug | Cost tier | Review quality | Notes |
| --- | --- | --- | --- | --- |
| Claude Sonnet 4.5 | anthropic/claude-sonnet-4-5 | Medium (~$0.12/file) | Excellent | Recommended for production |
| GPT-4o mini | openai/gpt-4o-mini | Low | Good | Default; suitable for high-volume repositories |
| GPT-4o | openai/gpt-4o | Medium | Very good | Good balance of cost and capability |
| Gemini Pro | google/gemini-pro | Low | Good | Cost-effective alternative |
Start with the default gpt-4o-mini to understand your cost baseline, then upgrade REVIEW_MODEL to a more capable model for production once you know your typical review volume.
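As a rough sketch of how these settings reach OpenRouter, the snippet below picks the slug for a pipeline phase and builds a request body in OpenRouter's OpenAI-compatible chat-completions format. The `build_request` helper and the phase names are hypothetical, not part of BugViper's API:

```python
import os

# OpenRouter exposes an OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(phase: str, messages: list[dict]) -> dict:
    """Select the slug for a phase and build the chat-completions body.

    Explorer and Reviewer use REVIEW_MODEL; the Summarizer uses
    SYNTHESIS_MODEL. Falls back to the documented default slug.
    """
    var = "REVIEW_MODEL" if phase in ("explorer", "reviewer") else "SYNTHESIS_MODEL"
    model = os.environ.get(var, "openai/gpt-4o-mini")
    return {"model": model, "messages": messages}
```

An invalid slug in either variable will surface as an error from OpenRouter at request time, so it is worth copying the slug verbatim from the models page.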

Cost guidance

Review cost depends on three factors: the model, the number of files changed, and the value of MAX_TOOL_ROUNDS.
  • Typical review with claude-sonnet-4-5: approximately $0.12 per file, using roughly 6,500 tokens per file
  • Typical review with gpt-4o-mini: substantially lower cost per file, suitable for large repositories or frequent reviews
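For budgeting, a back-of-the-envelope estimate from the ~$0.12/file figure above is usually enough. This is illustrative only; real cost varies with diff size, model pricing, and MAX_TOOL_ROUNDS:

```python
def estimate_review_cost(files_changed: int, cost_per_file: float = 0.12) -> float:
    """Rough PR review cost in dollars, using the ~$0.12/file figure
    quoted for claude-sonnet-4-5. Purely a planning estimate."""
    return round(files_changed * cost_per_file, 2)
```

For example, a 10-file PR reviewed with claude-sonnet-4-5 lands around $1.20, while the same PR on gpt-4o-mini costs a small fraction of that.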

Reducing cost with MAX_TOOL_ROUNDS

MAX_TOOL_ROUNDS caps how many graph tool calls the Explorer node can make per file. Each tool call adds tokens to the context window, which drives cost.
| Setting | Behavior | Best for |
| --- | --- | --- |
| 8 (default) | Thorough investigation; catches complex cross-file issues | Security-sensitive or complex codebases |
| 4–5 | Faster, cheaper; still catches most issues | Simple files, high-volume repositories |
| 2–3 | Minimal context gathering; lowest cost | Trivial changes (docs, config updates) |
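One way to see why the cap matters: if each tool result appends a roughly fixed number of tokens to the Explorer's context, context size (and therefore cost) grows linearly with rounds. The base and per-call figures below are hypothetical, chosen so the default of 8 rounds lands near the ~6,500 tokens/file quoted above:

```python
def estimate_context_tokens(rounds: int, base: int = 1500, per_call: int = 625) -> int:
    """Linear model of Explorer context growth: a fixed base (diff, prompt)
    plus one tool result per round. The constants are assumptions for
    illustration, not measured BugViper values."""
    return base + rounds * per_call
```

Under these assumptions, dropping MAX_TOOL_ROUNDS from 8 to 4 cuts the per-file context from ~6,500 tokens to ~4,000, which translates fairly directly into lower cost.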

Controlling PR description updates

By default, BugViper updates the pull request description with a structured summary after each review. If you prefer BugViper to post comments only — without touching the PR description — set:
ENABLE_PR_DESCRIPTION_UPDATE=false
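If you script around this flag yourself, a lenient reading that defaults to "on" matches the documented behavior (updates enabled unless explicitly disabled). The helper below is a hypothetical sketch, not BugViper code:

```python
import os

def pr_description_updates_enabled() -> bool:
    """Treat any value other than an explicit 'false' as enabled,
    matching the documented default of updating the PR description."""
    raw = os.environ.get("ENABLE_PR_DESCRIPTION_UPDATE", "true")
    return raw.strip().lower() != "false"
```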

Using the same model for both phases

If you want consistent behavior across the full pipeline, set both variables to the same model:
REVIEW_MODEL=anthropic/claude-sonnet-4-5
SYNTHESIS_MODEL=anthropic/claude-sonnet-4-5
This maximizes quality at higher cost. For most teams, using a capable model for REVIEW_MODEL and a cheaper model for SYNTHESIS_MODEL provides a good quality-to-cost ratio — the Summarizer node works primarily on structured data already extracted by the Reviewer.

Using any OpenRouter model

BugViper’s synthesis phase uses robust JSON extraction that works with any model on OpenRouter — it handles code fences, prose wrapping, and raw JSON output without requiring structured-output API support. This means you can experiment with newer or less common models freely. To find the correct slug for any model, visit the OpenRouter models page and copy the identifier shown under the model name (e.g., meta-llama/llama-3.1-70b-instruct).
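A fence- and prose-tolerant extractor of this kind typically tries progressively looser strategies. The sketch below is an assumed approach, not BugViper's actual implementation: parse the raw text, then look inside a fenced code block, then fall back to the outermost brace-delimited span:

```python
import json
import re

# A backtick, spelled out so the pattern doesn't embed a literal code fence.
TICK = chr(96)
FENCE_RE = re.compile(TICK * 3 + r"(?:json)?\s*(.*?)" + TICK * 3, re.DOTALL)

def extract_json(text: str) -> dict:
    """Try raw JSON, then a fenced block, then the outermost {...} span."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    fenced = FENCE_RE.search(text)
    if fenced:
        try:
            return json.loads(fenced.group(1))
        except json.JSONDecodeError:
            pass
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        return json.loads(text[start : end + 1])
    raise ValueError("no JSON object found in model output")
```

Because the fallback only needs a brace-delimited object somewhere in the reply, models that wrap their answer in markdown fences or conversational prose still parse cleanly.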