BugViper reviews pull requests directly on GitHub using a comment-based trigger. Once a repository is indexed and BugViper is installed as a GitHub App, any team member can request an AI review by mentioning @bugviper in a PR comment. BugViper’s LangGraph-powered agent explores your codebase graph for context, then posts structured feedback as inline diff comments pinned to the exact lines where issues were found.

Prerequisites

Before you can trigger a review, the repository must be indexed in BugViper. If you have not done this yet, follow the index a repository guide first.

Request a review

1. Open a pull request on GitHub

Create or navigate to an open pull request on any repository where BugViper is installed. The review agent works against the PR diff, so the PR must have at least one changed file.
2. Comment with a review command

Add a comment to the PR containing one of the following commands:
  • @bugviper review — performs an incremental review, focusing on files that are new or changed in this PR. This is the fastest option and is appropriate for most PRs.
  • @bugviper full review — reviews all files included in the PR diff from scratch. Use this after large refactors or when you want a thorough pass across every changed file.
Use @bugviper full review after a large refactor to catch issues across all changed files, even those that appear minimally modified in the diff.
3. Wait for the rocket reaction

BugViper reacts to your comment with a 🚀 emoji to confirm it received the trigger and the review pipeline has started. This reaction appears within a few seconds. If you do not see it, check that the repository is indexed and that BugViper is installed on your GitHub organization.
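If you automate around BugViper (for example, in CI), you can confirm the acknowledgement by listing the trigger comment's reactions via GitHub's REST API (GET /repos/{owner}/{repo}/issues/comments/{id}/reactions) and looking for a rocket from the bot. A minimal sketch — the bot's login ("bugviper[bot]") is an assumption:

```python
# Check whether BugViper has acknowledged the trigger comment.
# `reactions` mirrors the shape of GitHub's REST API response,
# where a rocket reaction has content == "rocket".
def has_rocket_ack(reactions: list[dict], bot_login: str = "bugviper") -> bool:
    return any(
        r.get("content") == "rocket"
        # Assumed bot login; GitHub App bots log in as "<name>[bot]".
        and r.get("user", {}).get("login", "").startswith(bot_login)
        for r in reactions
    )

reactions = [{"content": "rocket", "user": {"login": "bugviper[bot]"}}]
print(has_rocket_ack(reactions))  # True
```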
4. Read the top-level review comment

When the review completes, BugViper posts a structured summary comment at the top level of the PR. This comment contains:
  • Model used and the total number of actionable inline comments posted.
  • Walkthrough table — one row per changed file, each with a one-line plain-English summary of what changed.
  • Impact Analysis — an assessment of how the changes interact with the rest of the codebase, based on the call graph and import relationships BugViper traced during exploration.
  • Positive Findings — good patterns, well-structured code, or notable improvements the agent identified.
5. Review inline diff comments

For each issue the agent found with sufficient confidence, BugViper posts an inline comment pinned to the exact line in the diff. Each inline comment includes:
  • Severity — Low, Medium, or High, indicating the potential impact of the issue.
  • Confidence — a score from 1 to 10 representing how certain the agent is that this is a real problem.
  • Suggested fix — a specific, actionable description of what to change, often including a one-line code example you can apply directly.
Only issues with a confidence score of 7/10 or higher are posted as inline comments. Lower-confidence findings are silently omitted to reduce noise.
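The threshold behavior is simple to state as code. The 7/10 cutoff comes from the docs above; the `Finding` data model and function names here are illustrative assumptions, not BugViper's internal types:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    severity: str    # "Low" | "Medium" | "High"
    confidence: int  # 1-10, the agent's certainty this is a real problem
    message: str

# Findings at or above this confidence are posted as inline comments;
# the rest are silently dropped to reduce noise.
POST_THRESHOLD = 7

def postable(findings: list[Finding]) -> list[Finding]:
    return [f for f in findings if f.confidence >= POST_THRESHOLD]

findings = [
    Finding("app.py", 12, "High", 9, "SQL built via string concatenation"),
    Finding("app.py", 40, "Low", 5, "possible dead code"),
]
print([f.line for f in postable(findings)])  # [12]
```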

Example review output

A typical inline comment from BugViper looks like this:
Severity: Medium — Confidence: 8/10
str(e)[:100] leaks internal error details (rate limit messages, model names) into the user-facing response. Log the exception server-side and return a generic fallback message instead to prevent accidental information disclosure.
And a walkthrough table entry:
File — Summary
src/api/routers/ingestion.py — Adds a new /embed endpoint that re-runs embedding for an already-ingested repository without re-cloning.
src/common/embedder.py — Adds an idempotency check to skip nodes that already have embeddings.
If the repository is not indexed, BugViper will post a comment on the PR explaining that the repository needs to be ingested first, and provide a link to the BugViper dashboard. The 🚀 reaction will not appear in this case because the review pipeline does not start.