BugViper only reviews repositories that have been indexed. Index your repository before requesting a review.
Triggering a review
Post a comment on any GitHub pull request with one of these commands:

| Command | What it does |
|---|---|
| @bugviper review | Reviews only the files changed in this PR (incremental) |
| @bugviper full review | Reviews every file in the repository |
What the agent does
The review pipeline runs a three-node LangGraph graph for each file under review: Explorer → Reviewer → Summarizer.

Explorer — investigating the codebase
The Explorer node runs a ReAct loop, iteratively calling tools against the Neo4j knowledge graph to build context about the code being reviewed. It has access to 19 tools that cover the full range of graph queries:
- Tracing call chains and callers upstream
- Fetching class hierarchies and inheritance trees
- Calculating cyclomatic complexity for functions in the diff
- Estimating blast radius (how many callers would break if a function changes)
- Searching for related functions, variables, modules, and file content
- Retrieving full file source and language statistics
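One of these lookups, the blast-radius estimate, amounts to counting distinct upstream callers in the knowledge graph. A minimal sketch of such a query builder, assuming a schema with Function nodes and CALLS relationships (the actual schema and tool signatures are not documented here):

```python
# Hypothetical sketch: the graph schema (Function nodes, CALLS edges) and
# the query shape are assumptions, not BugViper's actual implementation.

def blast_radius_query(function_name: str, max_depth: int = 5) -> tuple[str, dict]:
    """Build a Cypher query counting distinct upstream callers of a function."""
    cypher = (
        "MATCH (f:Function {name: $name})"
        f"<-[:CALLS*1..{max_depth}]-(caller:Function) "
        "RETURN count(DISTINCT caller) AS blast_radius"
    )
    return cypher, {"name": function_name}

query, params = blast_radius_query("parse_config")
print(query)
```

Running such a query with the Neo4j driver would return the number of functions that could break if the target changes.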
Reviewer — generating structured findings
The Reviewer node reads the Explorer’s full message history and the file context, then makes a single structured LLM call to produce a list of issues and positive findings. Each issue includes:
- Title and description of the problem
- Severity: Low, Medium, or High
- Confidence score: 1–10
- Suggested fix with a code snippet you can apply directly
- Exact line numbers in the diff
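The fields above can be pictured as a simple schema. This sketch uses plain dataclasses for illustration; the field names mirror the bullet list, but the actual structured-output schema BugViper sends to the LLM is not shown in this document:

```python
from dataclasses import dataclass, field

# Illustrative schema only; not BugViper's actual structured-output types.

@dataclass
class Issue:
    title: str
    description: str
    severity: str          # "Low", "Medium", or "High"
    confidence: int        # 1-10
    suggested_fix: str     # code snippet that can be applied directly
    start_line: int        # exact line numbers in the diff
    end_line: int

@dataclass
class ReviewResult:
    issues: list[Issue] = field(default_factory=list)
    positive_findings: list[str] = field(default_factory=list)
```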
What you see on GitHub
After the agent finishes, BugViper posts two types of comments to your pull request:

Top-level summary comment — posted once per review run, containing:
- The model used and the total number of actionable comments
- A walkthrough table listing every reviewed file and a one-line summary of what changed
- An Impact Analysis section describing the broader effect of the changes
- A Positive Findings section highlighting good patterns found in the diff
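For a concrete picture, a walkthrough table like the one described above could be rendered with a helper along these lines (hypothetical; BugViper's actual formatting code is not shown here):

```python
# Hypothetical rendering helper for the per-file walkthrough table.

def walkthrough_table(file_summaries: dict[str, str]) -> str:
    rows = ["| File | Summary |", "|---|---|"]
    rows += [f"| {path} | {summary} |" for path, summary in file_summaries.items()]
    return "\n".join(rows)

print(walkthrough_table({"app/api.py": "Added rate-limit handling to the LLM client"}))
```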
Example: inline security comment
The agent might flag an issue like this on a specific line:
Severity: Medium — Confidence: 8/10
LLM error details (rate limits, model names, API keys) are exposed to the user via str(e)[:100]. Log the error server-side and return a clean fallback message to prevent accidental information disclosure.
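The pattern that finding points at, and its suggested fix, look roughly like this. The handler functions are illustrative, not code from BugViper or any reviewed project:

```python
import logging

logger = logging.getLogger(__name__)

def handle_llm_error_leaky(e: Exception) -> str:
    # Problematic: leaks rate limits, model names, or API keys to the user.
    return f"LLM call failed: {str(e)[:100]}"

def handle_llm_error_safe(e: Exception) -> str:
    # Better: log full details server-side, return a clean fallback message.
    logger.error("LLM call failed", exc_info=e)
    return "Something went wrong while generating the review. Please try again."
```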
The confidence filter
BugViper applies a hard filter: only issues scored 7/10 or higher in confidence are posted to GitHub. Issues below this threshold are discarded. This means every comment you receive reflects something the agent is reasonably certain is a real problem — not a guess.

Cost
A typical review costs approximately $0.12 per file when using claude-sonnet. The total cost scales with the number of files reviewed and the number of tool rounds the Explorer uses. Use the incremental @bugviper review command (not full review) for routine PR feedback to keep costs manageable.
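Both the confidence cutoff and the cost math above are straightforward. A sketch, using the 7/10 threshold and the ~$0.12-per-file figure from this document (function names are illustrative):

```python
# Sketch of the hard confidence filter and a back-of-envelope cost estimate.
# Threshold and per-file figure come from the text; names are hypothetical.

CONFIDENCE_THRESHOLD = 7
COST_PER_FILE_USD = 0.12

def postable_issues(issues: list[dict]) -> list[dict]:
    """Keep only issues scored 7/10 or higher; the rest are discarded."""
    return [i for i in issues if i["confidence"] >= CONFIDENCE_THRESHOLD]

def estimated_cost(num_files: int) -> float:
    return num_files * COST_PER_FILE_USD

issues = [{"title": "A", "confidence": 8}, {"title": "B", "confidence": 5}]
print(postable_issues(issues))   # only the 8/10 issue survives
print(estimated_cost(10))        # roughly $1.20 for a 10-file review
```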