
Build a Customer Support Chatbot with LangChain: Exact Python Script

To build a customer support chatbot with LangChain, you need a minimal, production-ready pipeline that handles context, routing, and fallbacks without theoretical overhead. This guide delivers an exact, runnable Python script using LangChain Expression Language (LCEL), local session memory, and strict error handling. Follow the steps below to deploy immediately.

Prerequisites & Environment Configuration

Install the exact dependency stack and configure your API key. Skip virtual environment setup if already active.

pip install langchain langchain-openai langchain-community python-dotenv
touch .env && echo 'OPENAI_API_KEY=your_key_here' > .env

Validate the key loads correctly before proceeding: python -c "import os; from dotenv import load_dotenv; load_dotenv(); print(os.getenv('OPENAI_API_KEY')[:5])".
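If you prefer a fail-fast check inside the script itself, a minimal sketch (not part of the main pipeline below) is to abort startup when the key is absent:

import os
from dotenv import load_dotenv

load_dotenv()
# Abort early with a clear message instead of failing on the first API call.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is missing; check your .env file.")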

Core Pipeline Architecture (LCEL)

The LCEL pipeline chains components deterministically: ChatPromptTemplate → ChatModel → StrOutputParser, with RunnableWithMessageHistory wrapping the composed chain to inject per-session conversation state. This sequence guarantees that system instructions, conversation state, and model inference execute in a single, composable graph. This modular structure scales seamlessly when integrating into broader Building AI-Powered Business Applications workflows, allowing you to swap components without rewriting the orchestration layer.
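As a quick illustration of that composability (a standalone sketch, separate from the support script below; the model names are examples only), swapping the model provider touches a single node of the graph:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Summarize this support ticket: {ticket}")
parser = StrOutputParser()

# The same prompt and parser compose with any chat model that implements the
# Runnable interface, so swapping providers never touches the orchestration.
fast_chain = prompt | ChatOpenAI(model="gpt-4o-mini") | parser
quality_chain = prompt | ChatOpenAI(model="gpt-4o") | parser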

Minimal Viable Chatbot Script

The script below enforces a strict support tone, maintains per-session memory, and catches API failures gracefully.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.chat_history import InMemoryChatMessageHistory
from openai import OpenAIError, RateLimitError

load_dotenv()

SYSTEM_PROMPT = """You are a Tier-1 customer support agent.
- Maintain a professional, empathetic, and solution-oriented tone.
- If context is missing, ask clarifying questions. Never hallucinate policies.
- Keep responses under 3 sentences unless troubleshooting steps are required."""

prompt = ChatPromptTemplate.from_messages([
    ("system", SYSTEM_PROMPT),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}")
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.2)
parser = StrOutputParser()

# Compile chain
chain = prompt | llm | parser

# In-memory session store
store = {}
def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chatbot = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history"
)

def run_support_session(user_input: str, session_id: str = "default"):
    try:
        return chatbot.invoke(
            {"input": user_input},
            config={"configurable": {"session_id": session_id}}
        )
    except RateLimitError:
        # Catch RateLimitError first: it is a subclass of OpenAIError.
        return "System is experiencing high traffic. Please retry in 30 seconds."
    except OpenAIError as e:
        return f"Service error: {str(e)}"

Extending this base with vector stores or CRM hooks is covered in advanced Custom AI Chatbot Development workflows, where you can swap the static prompt for dynamic RAG retrieval.
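A minimal sketch of that swap, reusing SYSTEM_PROMPT, llm, and parser from the script above and assuming a recent langchain-core (which ships InMemoryVectorStore); the policy strings are placeholders for your real knowledge base:

from langchain_core.vectorstores import InMemoryVectorStore
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import OpenAIEmbeddings

vector_store = InMemoryVectorStore.from_texts(
    [
        "Refunds are issued to the original payment method within 5 business days.",
        "Premium subscribers receive 24/7 live chat support.",
    ],
    embedding=OpenAIEmbeddings(),
)
retriever = vector_store.as_retriever(search_kwargs={"k": 2})

rag_prompt = ChatPromptTemplate.from_messages([
    ("system", SYSTEM_PROMPT + "\n\nRelevant policy context:\n{context}"),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}")
])

def format_docs(docs):
    return "\n".join(doc.page_content for doc in docs)

# RunnablePassthrough.assign adds a `context` key computed from the user input,
# then the enriched dict flows through the same prompt | llm | parser graph.
# Wrap rag_chain in RunnableWithMessageHistory exactly like `chain` above.
rag_chain = (
    RunnablePassthrough.assign(
        context=lambda x: format_docs(retriever.invoke(x["input"]))
    )
    | rag_prompt
    | llm
    | parser
)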

Execution & Interactive Testing Loop

Run the CLI loop to validate context retention and error handling.

if __name__ == "__main__":
    session_id = "support_session_01"
    print("Support Chatbot Active. Type 'quit' to exit.")

    while True:
        user_msg = input("\nCustomer: ").strip()
        if user_msg.lower() in ["quit", "exit"]:
            print("Session closed.")
            break
        if not user_msg:
            continue

        print(f"\nAgent: {run_support_session(user_msg, session_id)}")

Production Deployment Checklist

Wrap the script for web deployment, enforce validation, and route conversations deterministically.

  1. FastAPI Route & Validation: Use Pydantic models to reject malformed payloads.
  2. Conversation ID Routing: Pass session_id via headers or query params. Map to Redis or PostgreSQL for persistence (see the Redis sketch after the route below).
  3. Structured Logging: Emit JSON logs for request_id, latency_ms, and error_code.
  4. Rate Limiting & Fallbacks: Implement token bucket limits. Route fallbacks to a secondary model or human queue (a with_fallbacks sketch follows the route below).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import logging

# run_support_session comes from the chatbot script above; import it here if
# the route lives in a separate module.

app = FastAPI()
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

class ChatRequest(BaseModel):
    message: str
    session_id: str

class ChatResponse(BaseModel):
    reply: str
    status: str

@app.post("/chat", response_model=ChatResponse)
async def handle_chat(req: ChatRequest):
    # Reject payloads that pass type validation but carry no content.
    if not req.message.strip():
        raise HTTPException(status_code=400, detail="Message cannot be empty.")
    logging.info(f"Processing session {req.session_id}")
    reply = run_support_session(req.message, req.session_id)
    return ChatResponse(reply=reply, status="success")
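For checklist item 2, a hedged sketch of Redis-backed persistence, assuming langchain_community's RedisChatMessageHistory, the redis client package, and an illustrative local URL:

# Sketch only: swap the in-memory dict for Redis-backed histories so sessions
# survive process restarts. Requires `pip install redis` and a running server.
from langchain_community.chat_message_histories import RedisChatMessageHistory

def get_session_history(session_id: str):
    # Each session maps to its own Redis key; the URL below is an assumption.
    return RedisChatMessageHistory(session_id, url="redis://localhost:6379/0")

For item 4's model-level fallback, LCEL's with_fallbacks retries the same input against a secondary model; the backup model choice here is illustrative and reuses prompt and parser from the script above:

# Sketch only: if the primary model call raises, LCEL invokes the backup.
primary = ChatOpenAI(model="gpt-4o-mini", temperature=0.2)
backup = ChatOpenAI(model="gpt-4o", temperature=0.2)
chain = prompt | primary.with_fallbacks([backup]) | parser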

Deploying this skeleton aligns with standard SaaS MVP with Python & AI patterns. For enterprise scaling, integrate CRM Data Integration pipelines to inject ticket history and customer metadata directly into the MessagesPlaceholder context.
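A hedged sketch of that injection, reusing get_session_history from the script above; the ticket summary string stands in for whatever your CRM client returns (the helper below is hypothetical, not a real API):

# Hypothetical sketch: pre-seed a session with CRM ticket context before the
# first customer turn, so it flows in through the history placeholder.
from langchain_core.messages import SystemMessage

def seed_session_from_crm(session_id: str, ticket_summary: str) -> None:
    history = get_session_history(session_id)
    history.add_message(SystemMessage(content=f"Prior ticket context: {ticket_summary}"))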

Conclusion

This exact pipeline provides a reliable foundation to build a customer support chatbot with LangChain. Deploy locally, validate routing, and scale to production using the checklist above.