Course Introduction & Environment Setup
The Lang Ecosystem Map
The "Lang family" is a set of complementary libraries all designed to work together. Understanding how they relate before writing a single line of code will save you from the most common mistake: reaching for the wrong tool.
```
┌──────────────────────────────────────────────────────────────┐
│                       YOUR APPLICATION                       │
├──────────────┬──────────────────┬────────────────────────────┤
│  UI Layer    │  Serving Layer   │  Observability Layer       │
│  (Chainlit,  │  (LangServe,     │  (LangSmith, Langfuse)     │
│   Next.js,   │   FastAPI,       │                            │
│   Streamlit) │   LangGraph      │                            │
│              │   Platform)      │                            │
├──────────────┴──────────────────┴────────────────────────────┤
│                     ORCHESTRATION LAYER                      │
│  LangGraph — stateful graphs, agents, multi-agent systems    │
├──────────────────────────────────────────────────────────────┤
│                      CHAIN / RAG LAYER                       │
│  LangChain — prompts, LCEL chains, retrievers, tools         │
├──────────────────────────────────────────────────────────────┤
│                       FOUNDATION LAYER                       │
│  LLM APIs (OpenAI, Anthropic, Gemini, local Ollama)          │
│  Vector Stores (Chroma, pgvector, Pinecone)                  │
└──────────────────────────────────────────────────────────────┘
```
LangServe is ideal for simple, mostly stateless chain serving. LangGraph Platform is the evolution for deploying complex stateful agents with built-in checkpointing, streaming, and scaling. You'll learn both — LangServe in Module 14, LangGraph Platform concepts in Module 13.
Installing the Toolchain
All packages live on PyPI. Create a virtual environment first — installing everything into the global Python interpreter is the number-one setup mistake.
```bash
python -m venv .venv
source .venv/bin/activate       # macOS / Linux
# .venv\Scripts\activate        # Windows PowerShell

pip install \
    langchain==0.3.7 \
    langchain-openai==0.2.7 \
    langchain-anthropic==0.2.4 \
    langchain-community==0.3.7 \
    langchain-text-splitters==0.3.2 \
    langchain-chroma==0.1.4 \
    langgraph==0.2.45 \
    langgraph-checkpoint-sqlite==2.0.3 \
    langsmith==0.1.143 \
    langfuse==2.51.3 \
    "langserve[all]==0.3.0" \
    fastapi==0.115.4 \
    "uvicorn[standard]==0.32.0" \
    python-dotenv==1.0.1 \
    httpx==0.27.2 \
    pydantic==2.9.2 \
    tenacity==9.0.0
```

The bracket extras (`[all]`, `[standard]`) are quoted so the command also works in zsh, which otherwise treats brackets as glob patterns.
LangChain moves fast. The version numbers above are tested for this course. Save them to a requirements.txt with pip freeze > requirements.txt so your environment is reproducible across machines and CI.
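Since the pins above are load-bearing, it can help to verify that the running environment actually matches them. Below is a minimal sketch using only the standard library; `parse_pins` and `check_pins` are hypothetical helpers written for this course, not part of any Lang package:

```python
# Sketch of a pin-vs-installed checker — stdlib only, hypothetical helpers.
from importlib import metadata


def parse_pins(lines: list[str]) -> dict[str, str]:
    """Parse 'name==version' requirement lines, skipping comments and blanks."""
    pins: dict[str, str] = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins[name.split("[")[0].strip()] = version.strip()  # drop extras like [all]
    return pins


def check_pins(pins: dict[str, str]) -> list[str]:
    """Return mismatch messages; an empty list means the environment matches."""
    problems = []
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append(f"{name}: not installed (want {wanted})")
            continue
        if installed != wanted:
            problems.append(f"{name}: installed {installed}, want {wanted}")
    return problems


pins = parse_pins(["langchain==0.3.7", "langserve[all]==0.3.0"])
print(pins)
# → {'langchain': '0.3.7', 'langserve': '0.3.0'}
# In a real project:
# check_pins(parse_pins(open("requirements.txt").read().splitlines()))
```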
Environment & API Keys
Never hardcode API keys. Use a .env file locally and your CI/CD secrets manager in production. Create .env in your project root:
```bash
# LLM providers — get keys from platform.openai.com / console.anthropic.com
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# LangSmith — smith.langchain.com → Settings → API Keys
LANGSMITH_API_KEY=ls__...
LANGCHAIN_TRACING_V2=true
LANGCHAIN_PROJECT=my-enterprise-app

# Langfuse — cloud.langfuse.com or self-hosted
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
```
Then load it at application startup with python-dotenv:

```python
from dotenv import load_dotenv
import os

load_dotenv()  # reads .env from the current directory or any parent

openai_key = os.getenv("OPENAI_API_KEY")
if not openai_key:
    raise RuntimeError("OPENAI_API_KEY is not set — check your .env file")
```
Add .env to your .gitignore immediately. A leaked OpenAI key can rack up thousands of dollars in charges within hours. Use .env.example (with blank values) as the template you commit instead.
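That `.env.example` template can be generated mechanically from your real `.env` by blanking the values. A small stdlib-only sketch; `make_env_example` is a hypothetical helper, not a dotenv API:

```python
# Sketch: derive a committable .env.example from a local .env by blanking
# every value. Hypothetical helper, stdlib only.
from pathlib import Path


def make_env_example(env_text: str) -> str:
    """Keep comments and key names; strip every secret value."""
    out = []
    for line in env_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or "=" not in stripped:
            out.append(line)        # blanks and comments pass through unchanged
        else:
            key, _, _ = line.partition("=")
            out.append(f"{key}=")   # key survives, value is blanked
    return "\n".join(out) + "\n"


print(make_env_example("# LLM providers\nOPENAI_API_KEY=sk-abc123"), end="")
# → # LLM providers
#   OPENAI_API_KEY=

# In a real project:
# Path(".env.example").write_text(make_env_example(Path(".env").read_text()))
```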
Your First LLM Call
With the environment configured, let's make a call to verify everything works. This snippet introduces three core LangChain objects you will use in every module: a model, a prompt, and an output parser.
```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

# 1. Model — wraps the OpenAI Chat API
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 2. Prompt — a reusable template with {topic} as a variable
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer."),
    ("human", "Explain {topic} in exactly two sentences."),
])

# 3. Output parser — extracts the text from the response
parser = StrOutputParser()

# 4. Chain — compose with the | pipe (LCEL)
chain = prompt | model | parser

# 5. Invoke
result = chain.invoke({"topic": "RAG (Retrieval Augmented Generation)"})
print(result)
# → RAG augments an LLM's response by first retrieving relevant documents
#   from a knowledge base so the model can ground its answer in real data.
```
If you see a two-sentence explanation, your environment is working. If you see an AuthenticationError, double-check that OPENAI_API_KEY is set correctly in your .env.
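The `|` pipe used above works because LangChain runnables implement Python's `__or__` operator, with each composition returning a new runnable. Here is a toy stdlib sketch of that operator pattern (deliberately simplified, not LangChain's real implementation):

```python
# Toy sketch of LCEL-style piping built on __or__.
# NOT LangChain's implementation — just the operator pattern it relies on.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # self | other → a new Step that runs self first, then other
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


prompt = Step(lambda d: f"Explain {d['topic']} briefly.")
model = Step(lambda text: f"(model answer to: {text})")
parser = Step(str.strip)

chain = prompt | model | parser
print(chain.invoke({"topic": "RAG"}))
# → (model answer to: Explain RAG briefly.)
```

The design payoff is that composition is just object construction: `prompt | model | parser` builds one callable pipeline without executing anything until `invoke` runs.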
Finally, run this quick script to confirm every key actually loaded before moving on:

```python
import os
from dotenv import load_dotenv

load_dotenv()


def check(name: str, value: str | None) -> None:
    status = "✓" if value else "✗ MISSING"
    print(f"  {status} {name}")


print("Environment check:")
check("OPENAI_API_KEY", os.getenv("OPENAI_API_KEY"))
check("LANGSMITH_API_KEY", os.getenv("LANGSMITH_API_KEY"))
check("LANGFUSE_PUBLIC_KEY", os.getenv("LANGFUSE_PUBLIC_KEY"))
check("LANGFUSE_SECRET_KEY", os.getenv("LANGFUSE_SECRET_KEY"))
```
When to Use Which Tool
One of the most common questions in the Lang ecosystem is which tool to reach for. Use this decision guide:
| Situation | Use | Why |
|---|---|---|
| Simple prompt → response | LangChain (LCEL chain) | No state needed; linear pipeline is enough |
| Q&A over documents | LangChain RAG chain | Load, embed, retrieve, generate — all in one pipeline |
| Multi-step agent with loops | LangGraph | Needs cycles, branching, and mutable state |
| Multi-agent coordination | LangGraph | Graph routing handles agent-to-agent handover natively |
| Human approval mid-workflow | LangGraph (HITL) | interrupt_before / interrupt_after checkpoints |
| Expose chain as REST API | LangServe | Auto-generates /invoke, /stream, and /batch endpoints from a single add_routes call |
| Trace & debug in the cloud | LangSmith | Managed, zero-config tracing with a rich UI |
| Self-hosted observability | Langfuse | Docker deploy, full data control, GDPR-ready |
Build the simplest thing that works first. Start with a plain LCEL chain. Add LangGraph only when you genuinely need state or loops. Add LangServe when you need an HTTP endpoint. Add LangSmith/Langfuse from day one — observability is always worth it.