System Prompts and Prompt Templates

Module 2 · ~10 min read
A system prompt sets the LLM's persona, capabilities, and rules for the entire conversation. In RAG, it is also where you tell the model how to use the retrieved context: whether to cite sources, which language to respond in, and how to handle the case where no relevant documents were found.

What a System Prompt Is

Every LLM chat API distinguishes between three message roles:

- system — high-priority instructions that frame the whole conversation
- user — the human's messages (in RAG, also the carrier of retrieved context)
- assistant — the model's own prior replies

The system message is the highest-priority instruction: it shapes how the model responds to every subsequent user message.
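These roles can be pictured as simple (role, content) pairs. A minimal sketch in plain Java (toy types, not any particular SDK's message classes):

```java
import java.util.List;

// Minimal stand-in for a chat transcript: each message carries a role and text.
record Message(String role, String content) {}

class RoleDemo {
    static List<Message> conversation() {
        return List.of(
            new Message("system", "You are a RAG assistant. Cite sources as [SOURCE N]."),
            new Message("user", "What was Q3 revenue growth?"),
            new Message("assistant", "Q3 revenue grew 15% year-over-year [SOURCE 1]."));
    }
}
```

The system message sits first and stays fixed; user and assistant messages alternate after it as the conversation grows.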

defaultSystem() vs .system()

| Method | When it applies | Use case |
| --- | --- | --- |
| defaultSystem(text) | Set once at bean creation via ChatClient.builder() | Persistent persona/rules that apply to every call — SYSTEM_PREAMBLE in Power RAG |
| .system(text) | Set per-request in the fluent chain | Override the system prompt for a specific call without rebuilding the bean |
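The precedence between the two can be sketched with a toy stand-in (plain Java, not the real Spring AI ChatClient API): a per-request system text, when present, wins over the builder-time default.

```java
// Toy model of system-prompt precedence (illustration only, not Spring AI).
class PromptConfig {
    private final String defaultSystem;  // fixed at build time, like defaultSystem(text)
    private String perRequestSystem;     // optional override, like .system(text)

    PromptConfig(String defaultSystem) {
        this.defaultSystem = defaultSystem;
    }

    // Per-request override, mirroring the fluent .system(text) call.
    PromptConfig system(String text) {
        this.perRequestSystem = text;
        return this;
    }

    // The system prompt actually sent: the override if set, else the default.
    String effectiveSystem() {
        return perRequestSystem != null ? perRequestSystem : defaultSystem;
    }
}
```

The design mirrors the table: the default is immutable once the bean is built, while the fluent override is scoped to a single call.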

MultilingualPromptBuilder

The user message in Power RAG is not just the raw question. It is a structured prompt containing retrieved context, the question, and language/citation instructions — all assembled by MultilingualPromptBuilder.

MultilingualPromptBuilder.java — buildUserMessage()
public String buildUserMessage(String question, String context,
                               String language, boolean hasRelevantContext,
                               boolean imagePresent) {
    // langInstruction (elided in this excerpt) is derived from the language
    // parameter, e.g. "Respond in: English"
    String imageInstruction = imagePresent
        ? "An image has been attached. Carefully examine it...\n\n"
        : "";

    // Fallback path: nothing relevant was retrieved, so say so explicitly
    // rather than letting the model invent citations.
    if (!hasRelevantContext || context.isBlank()) {
        return imageInstruction +
               "The knowledge base does not contain relevant documents. " +
               "Answer using your general knowledge.\n\n" +
               "Question: " + question + "\n\n" + langInstruction;
    }

    // Normal path: context block first, then the question, then citation rules.
    return imageInstruction + context +
           "\n\nQuestion: " + question +
           "\n\nPrimarily answer using the sources above, citing [SOURCE N]..." +
           langInstruction;
}
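The langInstruction used above is elided from this excerpt. One plausible construction (a hypothetical helper, inferred from the "Respond in: English" suffix in the rendered prompts) maps a language tag to its English display name:

```java
import java.util.Locale;

class LanguageInstruction {
    // Hypothetical helper: turn a BCP-47 tag ("de", "fr") into "Respond in: German".
    // Falls back to English when the tag is empty or unrecognized.
    static String of(String languageTag) {
        String name = Locale.forLanguageTag(languageTag == null ? "" : languageTag)
                            .getDisplayLanguage(Locale.ENGLISH);
        return "Respond in: " + (name.isEmpty() ? "English" : name);
    }
}
```

Keeping the fallback explicit means a missing or malformed language tag degrades to English instead of producing an empty instruction.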

Prompt Structure in Practice

The final user message sent to the LLM looks like this when context is available:

[USER MESSAGE]
[SOURCE 1] report.pdf § Executive Summary
The Q3 revenue increased by 15% year-over-year...

[SOURCE 2] policy.docx § Section 3
All employees must complete annual compliance training...

Question: What was Q3 revenue growth?

Primarily answer using the sources above, citing [SOURCE N] inline.
Respond in: English

And this when no relevant context was found:

[USER MESSAGE]
The knowledge base does not contain relevant documents. Answer using your general knowledge.

Question: What is the capital of France?

Respond in: English

Structured prompts with clear sections (context block, question, instructions) consistently produce better source citations and fewer hallucinations than unstructured prompts: the model can clearly identify which part is the evidence and which part is the question.
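The context block itself has to arrive pre-formatted. A hypothetical formatter (names assumed, not taken from the Power RAG source) that produces the "[SOURCE N] file § section" layout shown above:

```java
import java.util.List;

// Hypothetical formatter for retrieved chunks (illustration only).
record Chunk(String file, String section, String text) {}

class ContextFormatter {
    // Number each chunk and label it with its origin so the model
    // can cite [SOURCE N] inline in its answer.
    static String format(List<Chunk> chunks) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < chunks.size(); i++) {
            Chunk c = chunks.get(i);
            sb.append("[SOURCE ").append(i + 1).append("] ")
              .append(c.file()).append(" § ").append(c.section()).append('\n')
              .append(c.text()).append("\n\n");
        }
        return sb.toString().strip();
    }
}
```

Numbering at format time keeps the citation labels stable: [SOURCE 2] in the answer always refers to the second chunk the retriever returned for that request.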

Language Handling

MultilingualPromptBuilder appends a language instruction at the end of every user message. The language is detected from the request (or defaulted to English) and passed through the entire pipeline so that: