MCP Tool Servers

Module 10 · ~25 min read

Power RAG ships two MCP servers: a custom Python server that covers web fetching, time/weather, Jira, GitHub, and Google Cloud Logging; and a pre-compiled email server for IMAP/SMTP operations. This topic walks through how each is implemented, configured, and secured.

Server Inventory

| Server | Type | Location | Tools |
|---|---|---|---|
| powerrag-tools | Python stdio | backend/mcp/powerrag_mcp_tools.py | fetch_url, get_current_time, get_weather, jira_*, github_*, gcp_logging_query |
| boutquin-email | Binary stdio | backend/mcp/bin/mcp-server-email | email_list, email_search, email_get, email_reply, email_send, email_draft_create |

The Python MCP Server

The Python server is a single file that uses the mcp SDK to declare tools and handle JSON-RPC calls over stdin/stdout. Spring AI spawns it as a child process; no separate port or service is needed in development.

backend/mcp/requirements.txt — Python dependencies
mcp>=1.2.0
httpx>=0.27.0
google-auth[requests]>=2.29.0
Tip: Install dependencies once: pip install -r backend/mcp/requirements.txt. The Spring AI dev-profile auto-starts the server when the backend starts, so you never need to run the Python script directly.

Tool: fetch_url

The most general-purpose tool. It fetches any HTTP or HTTPS URL and returns structured JSON that the LLM can read directly. It includes smart content extraction: JSON responses are pretty-printed, HTML is stripped to plain text, and errors produce a structured {"ok": false, "error": "..."} object so the LLM can handle failures gracefully rather than parsing a raw exception.

fetch_url — response shape (JSON returned to LLM)
{
  "ok": true,
  "url": "https://example.com/data.json",
  "status_code": 200,
  "content_type": "application/json",
  "text": "{ \"key\": \"value\" }"
}

// On failure:
{
  "ok": false,
  "url": "https://example.com/missing",
  "status_code": 404,
  "error": "Not Found",
  "hint": "Try encoding special characters in the URL path"
}
Key concept: Always return structured JSON from tool functions rather than plain text strings. The LLM can parse JSON fields reliably; it cannot reliably extract information from prose error messages. A well-structured error payload lets the model decide whether to retry, rephrase, or report the failure to the user.
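The structured-result pattern can be sketched as a pair of shaping helpers (the helper names are illustrative, not the actual functions in the server):

```python
# Every code path returns a JSON-serialisable dict with an "ok" flag, so
# the LLM never has to parse a raw exception message.
import json
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects only the text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def shape_success(url, status_code, content_type, body: str) -> dict:
    if "json" in content_type:
        body = json.dumps(json.loads(body), indent=2)   # pretty-print JSON
    elif "html" in content_type:
        p = _TextExtractor()
        p.feed(body)
        body = " ".join(" ".join(p.parts).split())       # strip tags, squeeze whitespace
    return {"ok": True, "url": url, "status_code": status_code,
            "content_type": content_type, "text": body}

def shape_error(url, status_code, reason, hint=None) -> dict:
    out = {"ok": False, "url": url, "status_code": status_code, "error": reason}
    if hint:
        out["hint"] = hint                               # optional recovery advice for the LLM
    return out
```

The `hint` field is the interesting design choice: it gives the model an actionable next step instead of leaving it to guess why the fetch failed.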

Tools: get_current_time & get_weather

Time and weather are the simplest live-data cases: neither requires an API key, and both are pure lookups with no side effects.

| Tool | Parameters | API used | Key required? |
|---|---|---|---|
| get_current_time | timezone (IANA, e.g. Asia/Singapore) | Python datetime + zoneinfo | No |
| get_weather | location, units (metric/imperial) | Open-Meteo (free, no key needed) | No |
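A sketch of how a keyless get_weather call can be assembled against Open-Meteo's forecast endpoint (the exact parameters the real tool sends, and how it geocodes location to coordinates, are assumptions):

```python
# Builds the Open-Meteo forecast URL for given coordinates. Open-Meteo
# requires no API key; unit parameters are only added for imperial,
# since metric is the API default.
from urllib.parse import urlencode

FORECAST_URL = "https://api.open-meteo.com/v1/forecast"

def build_forecast_request(lat: float, lon: float, units: str = "metric") -> str:
    params = {
        "latitude": lat,
        "longitude": lon,
        "current": "temperature_2m,wind_speed_10m",
    }
    if units == "imperial":
        params["temperature_unit"] = "fahrenheit"
        params["wind_speed_unit"] = "mph"
    return f"{FORECAST_URL}?{urlencode(params)}"
```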

Jira Cloud Tools

Two Jira tools cover the most common support-desk workflows: searching issues with JQL and loading a single issue with its comments.

| Tool | Key parameters | What it returns |
|---|---|---|
| jira_search_issues | jql (default: project = KAN ORDER BY created DESC), max_results | Issue keys, summaries, statuses, assignees as JSON array |
| jira_get_issue | issue_key (e.g. KAN-5) | Full issue: description, status, priority, reporter, comments |

Credentials are read from environment variables at server startup:

Environment variables for Jira tools
# Required
JIRA_CLOUD_EMAIL=you@example.com
JIRA_CLOUD_API_TOKEN=your-atlassian-api-token

# Optional (defaults to your-domain.atlassian.net)
JIRA_CLOUD_BASE_URL=https://your-domain.atlassian.net

The Jira output includes a presentation_hint field that instructs the LLM to format issue lists with one issue per line — preventing the model from accidentally concatenating issue keys (e.g. writing KAN-5KAN-4 instead of KAN-5, KAN-4).
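A sketch of how jira_search_issues can assemble its REST call from the environment variables above, assuming Jira Cloud's documented `/rest/api/3/search` endpoint with Basic auth (the exact fields requested are an assumption):

```python
# Jira Cloud authenticates REST calls with Basic auth over email:api-token.
import os
from base64 import b64encode

def build_jira_search(jql="project = KAN ORDER BY created DESC", max_results=20):
    base = os.environ.get("JIRA_CLOUD_BASE_URL", "https://your-domain.atlassian.net")
    email = os.environ["JIRA_CLOUD_EMAIL"]
    token = os.environ["JIRA_CLOUD_API_TOKEN"]
    auth = b64encode(f"{email}:{token}".encode()).decode()
    return {
        "url": f"{base}/rest/api/3/search",
        "headers": {"Authorization": f"Basic {auth}", "Accept": "application/json"},
        "params": {"jql": jql, "maxResults": max_results,
                   "fields": "summary,status,assignee"},
    }
```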

GitHub Tools

| Tool | Key parameters | What it returns |
|---|---|---|
| github_search_code | query (GitHub code search syntax), per_page | File paths, repository names, and code snippet previews |
| github_get_repository_content | owner, repo, path, ref (branch/SHA) | File contents (decoded from base64) or directory listing |
Environment variable for GitHub tools
# Optional — provides higher rate limits and access to private repos
GITHUB_TOKEN=ghp_your_personal_access_token

Without a token the GitHub API allows about 60 unauthenticated requests per hour. For development on a private repo you will need a personal access token with repo scope.
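The decoding step github_get_repository_content performs can be sketched as follows (the helper name is illustrative): the GitHub contents API returns a JSON array for a directory, and for a file a JSON object whose content field is base64-encoded.

```python
# Normalises a GitHub contents API response: directory -> list of paths,
# file -> decoded UTF-8 text. b64decode tolerates the newlines GitHub
# inserts into the encoded payload.
import base64

def decode_contents_response(payload):
    if isinstance(payload, list):                 # directory listing
        return [entry["path"] for entry in payload]
    if payload.get("encoding") == "base64":       # regular file
        return base64.b64decode(payload["content"]).decode("utf-8")
    return payload.get("content", "")
```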

Google Cloud Logging Tool

The gcp_logging_query tool queries Cloud Logging using the same filter syntax as the Google Cloud Console Logs Explorer. This lets users ask natural-language questions about production log data directly in the chat interface.

Example Cloud Logging filter (LLM constructs this from user question)
resource.type="cloud_run_revision"
severity>=ERROR
timestamp>="2026-03-30T00:00:00Z"
Environment variables for GCP Logging
# One of:
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
# or Application Default Credentials (gcloud auth application-default login)

GCP_PROJECT_ID=your-gcp-project-id
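The request body gcp_logging_query can send to Cloud Logging's documented `entries:list` REST endpoint looks like this sketch (the page-size and ordering defaults chosen here are assumptions):

```python
# Builds an entries:list request body for the Cloud Logging REST API.
# The authenticated HTTP POST itself (via google-auth credentials) is
# omitted here.
import os

ENTRIES_LIST_URL = "https://logging.googleapis.com/v2/entries:list"

def build_entries_list_body(filter_expr: str, page_size: int = 50) -> dict:
    project = os.environ["GCP_PROJECT_ID"]
    return {
        "resourceNames": [f"projects/{project}"],
        "filter": filter_expr,            # same syntax as the Logs Explorer
        "orderBy": "timestamp desc",      # newest entries first
        "pageSize": page_size,
    }
```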

Email Server (boutquin-email)

The email server is a pre-compiled binary (available for macOS arm64, Linux amd64) that implements the MCP protocol over IMAP/SMTP. It requires an email-accounts.json configuration file that maps account names to connection details.

backend/mcp/email-accounts.docker-stub.json — example stub structure
{
  "accounts": [
    {
      "name": "work",
      "imap": {
        "host": "imap.gmail.com",
        "port": 993,
        "username": "you@gmail.com",
        "password": "app-specific-password"
      },
      "smtp": {
        "host": "smtp.gmail.com",
        "port": 587,
        "username": "you@gmail.com",
        "password": "app-specific-password"
      }
    }
  ]
}

The email tools are deliberately not enabled by default. Copy email-accounts.docker-stub.json to email-accounts.json and fill in real credentials to enable them.
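To fail fast on a malformed configuration file, a small startup check can verify the structure (this check is illustrative, not part of the shipped binary):

```python
# Validates that every account in email-accounts.json has complete IMAP
# and SMTP sections; returns a list of human-readable problems.
import json

REQUIRED = {"host", "port", "username", "password"}

def validate_accounts(raw: str) -> list:
    problems = []
    cfg = json.loads(raw)
    for acct in cfg.get("accounts", []):
        for section in ("imap", "smtp"):
            missing = REQUIRED - set(acct.get(section, {}))
            if missing:
                problems.append(
                    f"{acct.get('name', '?')}/{section}: missing {sorted(missing)}")
    return problems
```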

Tool Output Normalisation

Spring AI's Java MCP bridge occasionally wraps tool output in a TextContent object whose toString() produces a string like:

TextContent[annotations=null, text={...actual JSON...}, meta=null]

If this string is handed to a Gemini model as a tool result, Gemini rejects it with "Failed to parse JSON". ObservingToolCallback detects and strips this wrapper before the string reaches the model:

backend/src/main/java/com/powerrag/mcp/ObservingToolCallback.java — normalizeMcpToolOutput
static String normalizeMcpToolOutput(String out) {
    if (out == null || out.isEmpty()) return out;
    String t = out.strip();
    if (!t.startsWith("TextContent[")) return out;   // most calls take this path
    int te = t.indexOf("text=");
    if (te < 0) return out;
    int valueStart = te + "text=".length();
    int meta = t.lastIndexOf(", meta=");
    if (meta < valueStart) return out;
    return t.substring(valueStart, meta);            // extracted JSON payload
}
Key concept: Normalisation happens inside ObservingToolCallback.invoke() — immediately after the raw output is received from the delegate, before it is recorded or returned to the model. This means all downstream code (the LLM, the invocation recorder, the audit log) always sees clean JSON.

Security Model

All tool calls run in the backend process with server-side credentials. No end-user tokens are ever exposed in the browser. The trust boundary is the Spring Security JWT layer — authenticated users can trigger tool calls, but cannot directly configure or bypass credential resolution.

| Resource | Credential location | Scope |
|---|---|---|
| Jira Cloud | Environment variables on backend container | Read all issues in the configured project |
| GitHub | Optional env var (GITHUB_TOKEN) | Public repos without token; repo scope for private |
| GCP Logging | ADC or GOOGLE_APPLICATION_CREDENTIALS file | Logging read on the configured project |
| Email | email-accounts.json (not committed) | IMAP read + SMTP send for the configured account |
| Web URLs | None (public HTTP) | Any public URL; no intranet access by default |
Warning: The fetch_url tool can reach any URL accessible from the backend host, including internal services on the same network. In production, apply network egress rules or an allowlist to prevent server-side request forgery (SSRF) via the chat interface.
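One in-process mitigation, sketched here as a pre-flight guard (the policy details are an assumption; network-level egress rules remain the stronger control): resolve the hostname and refuse private, loopback, and link-local addresses before fetching.

```python
# Rejects URLs that resolve to non-public addresses before any request is
# made. Note this alone does not defeat DNS-rebinding; it narrows the SSRF
# surface rather than eliminating it.
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False                              # unresolvable -> deny
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False                          # internal address -> deny
    return True
```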