The Ask Agent is BugViper’s conversational interface to your codebase. Unlike search, which requires you to know what you’re looking for, the Ask Agent lets you describe what you want to understand in plain English and reasons across your entire Neo4j knowledge graph to give you a complete answer. You can ask about architecture, call chains, inheritance hierarchies, complexity hotspots, or anything else that would otherwise require you to grep and read through dozens of files manually.

How the Ask Agent works

The agent is a ReAct-style LLM agent connected to your Neo4j graph through 19 code exploration tools. When you submit a question, the agent:
  1. Decides which tools to call based on your question.
  2. Runs those tools against the graph — for example, searching for a function by name, tracing its callers, or looking up class inheritance.
  3. Uses the results to decide whether to call more tools or synthesize a final answer.
  4. Cites the source files and shows the relevant code inline in its response.
The agent typically makes 13 or more tool calls per query before producing an answer. This depth is what allows it to trace multi-hop relationships — like finding all callers of a function that is itself called from middleware — rather than returning a single flat result.
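The loop above can be sketched as follows. This is a simplified illustration, not BugViper's actual implementation: the tool names (`find_function`, `get_callers`), the canned tool results, and the `llm_decide` stub are all hypothetical stand-ins for the real LLM and the 19 graph tools.

```python
# Minimal sketch of a ReAct-style tool loop (illustrative only).

# Hypothetical tools backed by canned data instead of a Neo4j graph.
TOOLS = {
    "find_function": lambda name: {"name": name, "path": "api/auth.py", "line": 14},
    "get_callers": lambda name: ["get_current_user", "ingest_github_repository"],
}

def llm_decide(question, observations):
    """Stand-in for the LLM: pick the next tool call, or finish.

    A real agent would prompt the model with the question and the
    observations gathered so far; here the policy is hard-coded.
    """
    if not observations:
        return ("call", "find_function", "verify_firebase_token")
    if len(observations) == 1:
        return ("call", "get_callers", "verify_firebase_token")
    return ("answer", f"Callers found: {observations[-1]}")

def ask_agent(question, max_steps=10):
    observations = []
    for _ in range(max_steps):
        decision = llm_decide(question, observations)
        if decision[0] == "answer":          # step 3: synthesize final answer
            return decision[1]
        _, tool, arg = decision              # steps 1-2: choose and run a tool
        observations.append(TOOLS[tool](arg))
    return "No answer within step budget."

print(ask_agent("Which functions call the authentication middleware?"))
```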

How to use it

Open the Ask Agent page from the BugViper dashboard sidebar. Select the repository you want to query, type your question in the chat input, and press Enter. The agent’s response appears in the conversation thread with cited source files linked to the exact lines in the graph. The Ask Agent endpoint is also available via the REST API:
curl -X POST https://your-bugviper-instance/api/v1/rag/answer \
  -H "Authorization: Bearer YOUR_FIREBASE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Which functions call the authentication middleware?",
    "repo_id": "acme-corp/my-api"
  }'
The response includes the agent’s answer and a list of cited sources with file paths and line numbers:
{
  "answer": "The authentication middleware `verify_firebase_token` is called by three functions: `get_current_user` in `api/dependencies.py:14`, `ingest_github_repository` in `api/routers/ingestion.py:38`, and `embed_repository` in `api/routers/ingestion.py:278`. All three use FastAPI's dependency injection via `Depends(get_current_user)`.",
  "sources": [
    { "path": "api/dependencies.py", "line_number": 14, "name": "get_current_user", "type": "function" },
    { "path": "api/routers/ingestion.py", "line_number": 38, "name": "ingest_github_repository", "type": "function" }
  ]
}
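A client can turn the `sources` array into clickable references by combining each entry's `path` and `line_number`. A minimal sketch in Python, assuming the response shape shown above (the GitHub base URL is a placeholder — point it at your own repository host):

```python
import json

# Sample response in the shape documented above.
raw = """{
  "answer": "The authentication middleware is called by three functions.",
  "sources": [
    { "path": "api/dependencies.py", "line_number": 14, "name": "get_current_user", "type": "function" },
    { "path": "api/routers/ingestion.py", "line_number": 38, "name": "ingest_github_repository", "type": "function" }
  ]
}"""

response = json.loads(raw)

def citation_links(sources, base_url="https://github.com/acme-corp/my-api/blob/main"):
    # base_url is a placeholder; substitute your repository's web URL.
    return [f"{base_url}/{s['path']}#L{s['line_number']}" for s in sources]

for link in citation_links(response["sources"]):
    print(link)
```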

Example questions

The Ask Agent works best for questions about relationships, architecture, and code understanding. Here are some examples of what you can ask:
  • “What embedding model do we use and where is it configured?”
  • “Which modules does the ingestion engine depend on?”
  • “What external APIs does this service call?”
  • “How does the webhook router decide which handler to invoke?”
  • “Which functions call the authentication middleware?”
  • “Where is embed_texts called from?”
  • “What code runs when a GitHub push event is received?”
  • “Show me all callers of the search_code function.”
  • “What is the inheritance hierarchy of the IngestionEngine class?”
  • “Which classes extend BaseModel?”
  • “What methods does CodeSearchService expose?”
  • “What are the most complex functions in the repo?”
  • “Which functions have cyclomatic complexity above 10?”
  • “What would break if I changed the embed_texts function signature?”

Prerequisites

  • The repository must be indexed before the Ask Agent can answer questions about it. See index a repository.
  • Questions that involve semantic similarity — such as “find code related to rate limiting” — require embeddings to be generated. If you see no results for conceptual questions, run the embed endpoint for your repository.
The Ask Agent uses the model configured in your REVIEW_MODEL environment variable for reasoning. More capable models (such as anthropic/claude-sonnet-4-5) produce more accurate and detailed answers but cost more per query. Lighter models are suitable for simple structural questions; use a more capable model when you need the agent to reason across complex multi-hop relationships.
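For example, to select a more capable reasoning model, set the variable in the environment where the server runs (the model identifier below is the example named above; substitute any model your deployment supports):

```shell
# Select the reasoning model for the Ask Agent.
export REVIEW_MODEL="anthropic/claude-sonnet-4-5"
```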