BugViper routes all AI calls through OpenRouter, which gives you access to hundreds of models from OpenAI, Anthropic, Google, Meta, and more — all with a single API key. You can choose different models for the two phases of a review, so you can put a capable model on exploration and a cheaper one on synthesis, or use the same model for both.
How the review agent uses models
Every pull request review runs a three-node LangGraph pipeline:
- Explorer node — investigates the changed files by calling up to MAX_TOOL_ROUNDS tools against the Neo4j code graph to build context (call chains, complexity scores, dependency trees, and more).
- Reviewer node — reads the accumulated evidence and produces structured findings: issues with severity, confidence, and exact line numbers.
- Summarizer node — generates a narrative walkthrough and aggregates the findings into the top-level PR summary comment.
The REVIEW_MODEL variable controls the Explorer and Reviewer nodes. The SYNTHESIS_MODEL variable controls the Summarizer node.
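The three-node flow can be sketched in plain Python. This is an illustrative stub, not BugViper's actual implementation (which is a LangGraph graph): the state fields and node bodies are assumptions made for the sketch, but the split of responsibilities, and which model variable each node reads, follows the description above.

```python
import os
from dataclasses import dataclass, field

# Hypothetical state passed between the three nodes; the field names
# are illustrative, not BugViper's actual state schema.
@dataclass
class ReviewState:
    changed_files: list
    evidence: list = field(default_factory=list)
    findings: list = field(default_factory=list)
    summary: str = ""

def explorer(state: ReviewState) -> ReviewState:
    # Uses REVIEW_MODEL; gathers graph context, capped at MAX_TOOL_ROUNDS
    # tool calls per file (stubbed here as one evidence item per file).
    max_rounds = int(os.getenv("MAX_TOOL_ROUNDS", "8"))
    for path in state.changed_files:
        state.evidence.append(f"{path}: context from up to {max_rounds} tool rounds")
    return state

def reviewer(state: ReviewState) -> ReviewState:
    # Uses REVIEW_MODEL; turns accumulated evidence into structured findings.
    state.findings = [{"file": e.split(":")[0], "severity": "low"} for e in state.evidence]
    return state

def summarizer(state: ReviewState) -> ReviewState:
    # Uses SYNTHESIS_MODEL; aggregates findings into the PR summary comment.
    state.summary = f"{len(state.findings)} finding(s) across {len(state.changed_files)} file(s)"
    return state

def run_review(files):
    # The three nodes run in sequence, each passing its state forward.
    state = ReviewState(changed_files=files)
    for node in (explorer, reviewer, summarizer):
        state = node(state)
    return state
```

The key point the sketch captures: the Summarizer never re-reads the diff, it works from the findings the Reviewer already produced, which is why it tolerates a cheaper model.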
Setting your models
Add these lines to your .env file:
REVIEW_MODEL=anthropic/claude-sonnet-4-5
SYNTHESIS_MODEL=openai/gpt-4o-mini
MAX_TOOL_ROUNDS=8
Each model variable must be set to a valid OpenRouter model slug, exactly as it appears on the OpenRouter models page.
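A minimal sketch of reading these three variables at startup. The variable names come from this document; the loader function, its fallback defaults (assumed to match the documented gpt-4o-mini default), and the shape check are illustrative assumptions:

```python
import os

def resolve_models(env=os.environ):
    # Hypothetical config loader; fallback values are assumptions based
    # on the documented default of openai/gpt-4o-mini.
    return {
        "review": env.get("REVIEW_MODEL", "openai/gpt-4o-mini"),
        "synthesis": env.get("SYNTHESIS_MODEL", "openai/gpt-4o-mini"),
        "max_tool_rounds": int(env.get("MAX_TOOL_ROUNDS", "8")),
    }

def is_plausible_slug(slug: str) -> bool:
    # OpenRouter slugs have the shape "provider/model-name"; this checks
    # only the shape, not that the model actually exists on OpenRouter.
    parts = slug.split("/")
    return len(parts) == 2 and all(parts)
```

A shape check like this catches the most common mistake, setting a bare model name such as `gpt-4o` instead of the full `openai/gpt-4o` slug.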
Recommended models
| Model | OpenRouter slug | Cost tier | Review quality | Notes |
|---|---|---|---|---|
| Claude Sonnet 4.5 | anthropic/claude-sonnet-4-5 | Medium (~$0.12/file) | Excellent | Recommended for production |
| GPT-4o mini | openai/gpt-4o-mini | Low | Good | Default; suitable for high-volume repositories |
| GPT-4o | openai/gpt-4o | Medium | Very good | Good balance of cost and capability |
| Gemini Pro | google/gemini-pro | Low | Good | Cost-effective alternative |
Start with the default gpt-4o-mini to understand your cost baseline, then upgrade REVIEW_MODEL to a more capable model for production once you know your typical review volume.
Cost guidance
Review cost depends on three factors: the model, the number of files changed, and the value of MAX_TOOL_ROUNDS.
- Typical review with claude-sonnet-4-5: approximately $0.12 per file, using roughly 6,500 tokens per file
- Typical review with gpt-4o-mini: substantially lower cost per file, suitable for large repositories or frequent reviews
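The per-file figures above make back-of-envelope budgeting straightforward. A small estimator, using the $0.12/file and 6,500 tokens/file figures quoted for claude-sonnet-4-5 as defaults (the helper itself is illustrative, not part of BugViper):

```python
def estimate_review_cost(files_changed: int, cost_per_file: float = 0.12) -> float:
    # Rough dollar cost for one review; 0.12 is the documented
    # claude-sonnet-4-5 per-file estimate.
    return round(files_changed * cost_per_file, 2)

def estimate_review_tokens(files_changed: int, tokens_per_file: int = 6500) -> int:
    # Rough token volume; 6,500 is the documented per-file figure.
    return files_changed * tokens_per_file
```

For example, a 10-file PR on claude-sonnet-4-5 works out to about $1.20 and roughly 65,000 tokens.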
MAX_TOOL_ROUNDS caps how many graph tool calls the Explorer node can make per file. Each tool call adds tokens to the context window, which drives cost.
| Setting | Behavior | Best for |
|---|---|---|
| 8 (default) | Thorough investigation; catches complex cross-file issues | Security-sensitive or complex codebases |
| 4–5 | Faster, cheaper; still catches most issues | Simple files, high-volume repositories |
| 2–3 | Minimal context gathering; lowest cost | Trivial changes (docs, config updates) |
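To make the cost lever concrete, here is a sketch of how a per-file round cap of this kind behaves: the Explorer stops requesting graph tools once the budget is spent and proceeds with whatever evidence it has. The function and tool names are illustrative, not BugViper's actual code:

```python
def explore_file(requested_tools, max_tool_rounds=8):
    # Cap the number of graph tool calls per file; each round adds
    # tokens to the context window, so the cap bounds per-file cost.
    evidence = []
    for round_no, tool in enumerate(requested_tools):
        if round_no >= max_tool_rounds:
            break  # budget exhausted; review proceeds with partial context
        evidence.append(f"round {round_no + 1}: {tool}")
    return evidence
```

With `max_tool_rounds=4`, a file whose investigation would otherwise take 12 tool calls stops after 4, trading depth for roughly a third of the token spend.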
Controlling PR description updates
By default, BugViper updates the pull request description with a structured summary after each review. If you prefer BugViper to post comments only — without touching the PR description — set:
ENABLE_PR_DESCRIPTION_UPDATE=false
Using the same model for both phases
If you want consistent behavior across the full pipeline, set both variables to the same model:
REVIEW_MODEL=anthropic/claude-sonnet-4-5
SYNTHESIS_MODEL=anthropic/claude-sonnet-4-5
This maximizes quality at higher cost. For most teams, using a capable model for REVIEW_MODEL and a cheaper model for SYNTHESIS_MODEL provides a good quality-to-cost ratio — the Summarizer node works primarily on structured data already extracted by the Reviewer.
Using any OpenRouter model
BugViper’s synthesis phase uses robust JSON extraction that works with any model on OpenRouter — it handles code fences, prose wrapping, and raw JSON output without requiring structured-output API support. This means you can experiment with newer or less common models freely.
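The lenient extraction described above can be sketched with the standard library. This is an assumption about the general technique (try raw JSON, then fenced blocks, then a brace-delimited span in prose), not BugViper's actual implementation:

```python
import json
import re

def extract_json(text: str):
    # 1. The whole response is already raw JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # 2. JSON wrapped in a ``` or ```json code fence.
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass
    # 3. First {...} span embedded in surrounding prose.
    brace = re.search(r"\{.*\}", text, re.DOTALL)
    if brace:
        return json.loads(brace.group(0))
    raise ValueError("no JSON object found in model output")
```

Because every fallback ends in a strict `json.loads`, malformed output still fails loudly instead of producing a half-parsed result.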
To find the correct slug for any model, visit the OpenRouter models page and copy the identifier shown under the model name (e.g., meta-llama/llama-3.1-70b-instruct).