Project Setup

Module 1 · ~10 min read

This topic walks through the Maven pom.xml BOM import, the provider starter dependencies, and the complete application.yml configuration needed to run Power RAG with all three LLM providers.

Spring AI BOM Import

Declare the Spring AI BOM inside <dependencyManagement>. This pins all Spring AI artifact versions so you never have to manage them individually.

pom.xml — dependencyManagement
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.1.2</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

Provider Starters

Add one starter per LLM provider you want to support. No version is needed — the BOM manages them.

pom.xml — provider starters
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-anthropic</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-google-genai</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-ollama</artifactId>
</dependency>

Each starter auto-configures a ChatModel bean for that provider. You then wrap those models in ChatClient beans — see Topic 05.
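With three providers on the classpath there are three ChatModel beans, so each injection point has to say which one it wants. A minimal wiring sketch, not the Power RAG source; the bean names used in @Qualifier are assumptions, not confirmed names from the starters:

```java
// Hypothetical configuration class: wraps each provider's auto-configured
// ChatModel in its own ChatClient bean. @Qualifier bean names are assumptions.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatClientConfig {

    @Bean
    ChatClient claudeClient(@Qualifier("anthropicChatModel") ChatModel model) {
        return ChatClient.create(model);
    }

    @Bean
    ChatClient geminiClient(@Qualifier("googleGenAiChatModel") ChatModel model) {
        return ChatClient.create(model);
    }

    @Bean
    ChatClient ollamaClient(@Qualifier("ollamaChatModel") ChatModel model) {
        return ChatClient.create(model);
    }
}
```

ChatClient.create(model) is the simplest factory; the ChatClient.builder(model) variant lets you attach default system prompts and advisors per client.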

application.yml Configuration

The full AI-related configuration lives in application.yml. All sensitive values use the ${ENV_VAR:default} pattern — the real value comes from environment variables, with a safe default for local development.
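The semantics of that pattern can be sketched in plain Java. This is a toy resolver illustrating the lookup order Spring documents (environment value if present, otherwise the default after the colon), not Spring's actual implementation:

```java
// Toy sketch of ${KEY:default} resolution, for illustration only.
import java.util.Map;

public class PlaceholderDemo {

    // Resolve "${KEY:default}" against an environment map:
    // return the env value when present, else the default after ':'.
    static String resolve(String placeholder, Map<String, String> env) {
        String body = placeholder.substring(2, placeholder.length() - 1); // strip ${ and }
        int sep = body.indexOf(':');
        String key = sep >= 0 ? body.substring(0, sep) : body;
        String def = sep >= 0 ? body.substring(sep + 1) : null;
        String value = env.get(key);
        return value != null ? value : def;
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("QDRANT_HOST", "qdrant.internal");
        System.out.println(resolve("${QDRANT_HOST:localhost}", env)); // env wins
        System.out.println(resolve("${QDRANT_PORT:6334}", env));     // falls back to default
    }
}
```

Note that `${ANTHROPIC_API_KEY:}` has an empty default, so the property resolves to an empty string rather than failing when the variable is unset.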

application.yml
spring:
  autoconfigure:
    exclude: org.springframework.ai.model.ollama.autoconfigure.OllamaEmbeddingAutoConfiguration
  ai:
    anthropic:
      api-key: ${ANTHROPIC_API_KEY:}
      chat:
        options:
          model: claude-sonnet-4-6
    google:
      genai:
        api-key: ${GOOGLE_API_KEY:}
        chat:
          options:
            model: gemini-2.5-flash
        embedding:
          text:
            options:
              model: gemini-embedding-001
              dimensions: ${powerrag.embedding.dimensions:768}
    ollama:
      base-url: ${OLLAMA_BASE_URL:http://localhost:11434}
      chat:
        options:
          model: qwen2.5-coder:7b
    vectorstore:
      qdrant:
        host: ${QDRANT_HOST:localhost}
        port: ${QDRANT_PORT:6334}
        collection-name: power_rag_docs
        dimensions: ${powerrag.embedding.dimensions:768}

powerrag:
  embedding:
    dimensions: ${POWERRAG_EMBEDDING_DIMENSIONS:768}
  guardrails:
    input-model-id: ${POWERRAG_GUARDRAIL_MODEL:gemini-2.5-flash}

Never commit real API keys. Use environment variables with the ${KEY:default} syntax shown above. In production, inject secrets via your CI/CD pipeline or a secrets manager, never via checked-in configuration files.

Key Configuration Notes

Anthropic (Claude)

Set spring.ai.anthropic.api-key to an empty string for local development without a real key; the bean will still be created. If you call it, it fails fast with a clear auth error rather than silently returning bad results.

Google GenAI (Gemini)

This starter supplies both the chat model (gemini-2.5-flash) and the embedding model (gemini-embedding-001). The embedding dimensions read the shared powerrag.embedding.dimensions property, so they always agree with the Qdrant collection.

Ollama (Local Models)

Ollama runs locally and needs no API key; only the base URL is configurable, defaulting to http://localhost:11434. Its embedding auto-configuration is excluded at the top of application.yml so that Gemini remains the only auto-configured embedding model.

Qdrant Vector Store

Port 6334 is Qdrant's gRPC port, which the Spring AI Qdrant client uses (REST listens on 6333). The collection dimensions must match the embedding model's output size, which is why both settings resolve the same powerrag.embedding.dimensions property.