Module 00

Course Introduction & Environment Setup

⏱ ~1 hour ❓ 5-question quiz 🎯 Unlock Module 01

The Lang Ecosystem Map

The "Lang family" is a set of complementary libraries all designed to work together. Understanding how they relate before writing a single line of code will save you from the most common mistake: reaching for the wrong tool.

  ┌─────────────────────────────────────────────────────────────┐
  │                    YOUR APPLICATION                          │
  ├──────────────┬──────────────────┬────────────────────────────┤
  │  UI Layer    │  Serving Layer   │  Observability Layer       │
  │  (Chainlit,  │  (LangServe,     │  (LangSmith, Langfuse)     │
  │  Next.js,    │  FastAPI,        │                            │
  │  Streamlit)  │  LangGraph Plat.)│                            │
  ├──────────────┴──────────────────┴────────────────────────────┤
  │               ORCHESTRATION LAYER                            │
  │   LangGraph — stateful graphs, agents, multi-agent systems   │
  ├──────────────────────────────────────────────────────────────┤
  │               CHAIN / RAG LAYER                              │
  │   LangChain — prompts, LCEL chains, retrievers, tools        │
  ├──────────────────────────────────────────────────────────────┤
  │               FOUNDATION LAYER                               │
  │   LLM APIs (OpenAI, Anthropic, Gemini, local Ollama)         │
  │   Vector Stores  (Chroma, pgvector, Pinecone)                │
  └──────────────────────────────────────────────────────────────┘
🦜
LangChain
The foundational library. Prompts, chains (LCEL), document loaders, retrievers, and tool calling. Think of it as the "plumbing".
🕸️
LangGraph
Stateful, cyclic agent graphs. Built on top of LangChain. Use it whenever you need loops, branching, multi-agent coordination, or human-in-the-loop (HITL) workflows.
🔬
LangSmith
Hosted observability, tracing, and evaluation platform from the team behind LangChain. The go-to for teams who want managed infrastructure.
📊
Langfuse
Open-source alternative to LangSmith. Self-host for full data control, GDPR compliance, or zero vendor lock-in.
🚀
LangServe
Turns any LangChain runnable into a REST API instantly. Generates /invoke, /stream, /batch endpoints with zero boilerplate.
ℹ️
LangGraph Platform vs LangServe

LangServe is ideal for simple, mostly stateless chain serving. LangGraph Platform is the evolution for deploying complex stateful agents with built-in checkpointing, streaming, and scaling. You'll learn both — LangServe in Module 14, LangGraph Platform concepts in Module 13.
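
To make the LangServe card above concrete, here is a minimal sketch — the joke chain, the filename, and the route path are illustrative, not course code (Module 14 covers this properly):

python serve_demo.py
from fastapi import FastAPI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI(title="Joke API")

# Any LCEL runnable can be served — this one is deliberately trivial
chain = (
    ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# One call generates POST /joke/invoke, /joke/stream, /joke/batch,
# plus an interactive playground at /joke/playground
add_routes(app, chain, path="/joke")

# Run with: uvicorn serve_demo:app --reload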

Installing the Toolchain

All packages live on PyPI. Create a virtual environment first — installing course dependencies into your global Python is the number-one setup mistake.

bash Create & activate virtualenv
python -m venv .venv
source .venv/bin/activate        # macOS / Linux
# .venv\Scripts\activate         # Windows PowerShell
bash Install all course dependencies
pip install \
  langchain==0.3.7 \
  langchain-openai==0.2.7 \
  langchain-anthropic==0.2.4 \
  langchain-community==0.3.7 \
  langchain-text-splitters==0.3.2 \
  langchain-chroma==0.1.4 \
  langgraph==0.2.45 \
  langgraph-checkpoint-sqlite==2.0.3 \
  langsmith==0.1.143 \
  langfuse==2.51.3 \
  langserve[all]==0.3.0 \
  fastapi==0.115.4 \
  uvicorn[standard]==0.32.0 \
  python-dotenv==1.0.1 \
  httpx==0.27.2 \
  pydantic==2.9.2 \
  tenacity==9.0.0
💡
Pin your versions

LangChain moves fast. The version numbers above are tested for this course. Save them to a requirements.txt with pip freeze > requirements.txt so your environment is reproducible across machines and CI.
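
To confirm the pins resolved correctly, here is a quick check using only the standard library — importlib.metadata reads installed package versions without importing the packages themselves:

python Verify pinned versions
from importlib.metadata import version

# Should print exactly the versions pinned in the install command above
for pkg in ("langchain", "langgraph", "langsmith", "langfuse", "langserve"):
    print(f"{pkg}=={version(pkg)}")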

Environment & API Keys

Never hardcode API keys. Use a .env file locally and your CI/CD secrets manager in production. Create .env in your project root:

env .env — project root
# LLM providers — get keys from platform.openai.com / console.anthropic.com
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# LangSmith — smith.langchain.com → Settings → API Keys
LANGSMITH_API_KEY=ls__...
LANGCHAIN_TRACING_V2=true
LANGCHAIN_PROJECT=my-enterprise-app

# Langfuse — cloud.langfuse.com or self-hosted
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
python Loading env vars in every script
from dotenv import load_dotenv
import os

load_dotenv()  # reads .env from the current directory or any parent

openai_key = os.getenv("OPENAI_API_KEY")
if not openai_key:
    raise RuntimeError("OPENAI_API_KEY is not set — check your .env file")
🚨
Never commit .env to Git

Add .env to your .gitignore immediately. A leaked OpenAI key can rack up thousands of dollars in charges within hours. Use .env.example (with blank values) as the template you commit instead.
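
If you'd rather not maintain the template by hand, here is a small helper sketch — make_env_example.py is our illustrative name, not part of the course code — that blanks every value in .env and writes the result to .env.example:

python make_env_example.py
from pathlib import Path

# Copy .env to .env.example with all values blanked; comments and
# blank lines pass through untouched
lines = []
for line in Path(".env").read_text().splitlines():
    stripped = line.strip()
    if stripped and not stripped.startswith("#") and "=" in stripped:
        key, _, _ = line.partition("=")
        lines.append(f"{key}=")
    else:
        lines.append(line)

Path(".env.example").write_text("\n".join(lines) + "\n")
print("Wrote .env.example — commit this file, never .env")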

Your First LLM Call

With the environment configured, let's make a call to verify everything works. This snippet introduces three core LangChain objects you will use in every module: a model, a prompt, and an output parser.

python hello_llm.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

# 1. Model — wraps the OpenAI Chat API
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 2. Prompt — a reusable template with {topic} as a variable
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer."),
    ("human", "Explain {topic} in exactly two sentences."),
])

# 3. Output parser — extracts the text from the response
parser = StrOutputParser()

# 4. Chain — compose with the | pipe (LCEL)
chain = prompt | model | parser

# 5. Invoke
result = chain.invoke({"topic": "RAG (Retrieval Augmented Generation)"})
print(result)
# → RAG augments an LLM's response by first retrieving relevant documents
#   from a knowledge base so the model can ground its answer in real data.

If you see a two-sentence explanation, your environment is working. If you see an AuthenticationError, double-check that OPENAI_API_KEY is set correctly in your .env. Finally, run the script below to confirm that every service key is present:

python Smoke-test all services
import os
from dotenv import load_dotenv

load_dotenv()

def check(name: str, value: str | None) -> None:
    status = "✓" if value else "✗ MISSING"
    print(f"  {status}  {name}")

print("Environment check:")
check("OPENAI_API_KEY",       os.getenv("OPENAI_API_KEY"))
check("LANGSMITH_API_KEY",    os.getenv("LANGSMITH_API_KEY"))
check("LANGFUSE_PUBLIC_KEY",  os.getenv("LANGFUSE_PUBLIC_KEY"))
check("LANGFUSE_SECRET_KEY",  os.getenv("LANGFUSE_SECRET_KEY"))

When to Use Which Tool

One of the most common questions in the Lang ecosystem is which tool to reach for. Use this decision guide:

Situation                   | Use                    | Why
Simple prompt → response    | LangChain (LCEL chain) | No state needed; linear pipeline is enough
Q&A over documents          | LangChain RAG chain    | Load, embed, retrieve, generate — all in one pipeline
Multi-step agent with loops | LangGraph              | Needs cycles, branching, and mutable state
Multi-agent coordination    | LangGraph              | Graph routing handles agent-to-agent handover natively
Human approval mid-workflow | LangGraph (HITL)       | interrupt_before / interrupt_after checkpoints
Expose chain as REST API    | LangServe              | Auto-generates /invoke /stream /batch with one call
Trace & debug in the cloud  | LangSmith              | Managed, zero-config tracing with a rich UI
Self-hosted observability   | Langfuse               | Docker deploy, full data control, GDPR-ready
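
To give "stateful graph" a concrete shape before the LangGraph modules, here is a minimal sketch — a single-node counter of our own invention, just enough to show the State / node / edge vocabulary:

python A ten-second taste of LangGraph
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

# State is a plain TypedDict — nodes read it and return partial updates
class State(TypedDict):
    count: int

def bump(state: State) -> dict:
    return {"count": state["count"] + 1}

graph = StateGraph(State)
graph.add_node("bump", bump)
graph.add_edge(START, "bump")
graph.add_edge("bump", END)

app = graph.compile()
print(app.invoke({"count": 0}))  # → {'count': 1}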
💡
Start simple, add complexity

Build the simplest thing that works first. Start with a plain LCEL chain. Add LangGraph only when you genuinely need state or loops. Add LangServe when you need an HTTP endpoint. Add LangSmith/Langfuse from day one — observability is always worth it.


📝 Knowledge Check

Module 00 — Quiz

Score 80% or higher (4 out of 5) to unlock Module 01.
