Introduction
Ever had a group chat where one person is talking about quantum physics, another is sharing a secret pasta recipe, and someone else suddenly drops a dance tutorial? It’s chaos! 🚀🍝🕺
Now, imagine trying to build a chatbot that handles multiple discussions in one thread—math equations tangled with cooking tips, customer queries mixed with random jokes. That’s a recipe for confusion.
That’s exactly why multi-chat sessions exist! Instead of cramming everything into one messy conversation, we create separate chat sessions to keep topics organized and contextually relevant. Whether you're building a customer support bot, an AI tutor, or a team collaboration tool, structuring chats properly ensures a smooth and engaging user experience.
In this tutorial, we’ll walk through the why, what, and how of implementing multi-chat sessions, so your chatbot doesn’t start recommending spaghetti recipes in the middle of a coding discussion.
By the end of this blog, you'll know how to:
✅ Maintain multiple chat sessions without losing context.
✅ Assign unique session IDs to keep conversations separate.
✅ Manage and retrieve chat histories efficiently.
Here’s a sneak peek at what you’ll create by the end of this blog! 🌟
What Are Multiple Chat Sessions?
Multiple chat sessions mean that every user (or conversation) gets their own separate space—like having individual tables in a coffee shop instead of cramming everyone onto one giant table. Each session has its own unique ID, ensuring that messages don’t mix up like a badly shuffled deck of cards. This setup keeps conversations organized, contextually relevant, and easy to manage.
Why Does This Matter?
✅ Context Retention – Each session remembers its past messages, so you don’t have to explain yourself again. Imagine asking an AI assistant about JavaScript, and in the next message, it starts explaining how to bake a chocolate cake. Not ideal, right?
✅ Better User Experience – Users expect seamless, independent conversations. You wouldn’t want your customer support chat getting mixed up with your AI-generated bedtime stories.
✅ Scalability & Efficiency – Whether it's a chatbot for business support, an AI-powered tutor, or a virtual assistant, managing multiple chat sessions allows for parallel, uninterrupted interactions—making your application smoother and smarter.
How Does It Work?
Each chat session gets a unique identifier (session ID). Whenever a user interacts with your chatbot, the system retrieves the messages from the relevant session instead of pulling up a random conversation. It’s like having separate notebooks for different subjects instead of scribbling everything on a single page.
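Before we bring in any libraries, here is a minimal sketch of the idea in plain Python. The names (`session_store`, `add_message`) are purely illustrative, not from LangChain:

```python
# One "notebook" (message list) per session ID
session_store = {}  # maps session_id -> list of messages

def add_message(session_id: str, message: str) -> None:
    # Create the session's notebook on first use, then append to it
    session_store.setdefault(session_id, []).append(message)

add_message("user-42", "How do I reverse a list in Python?")
add_message("user-99", "Give me a pasta recipe.")

print(session_store["user-42"])  # only the coding conversation, no pasta in sight
```

Everything that follows is this same pattern, just backed by proper LangChain and Streamlit building blocks.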
How to Implement Multiple Chat Sessions with Memory using LangChain Classes
Now that we know why multiple chat sessions matter, let's roll up our sleeves and implement them using LangChain! 💻🚀
To keep conversations separate, we need to store chat histories in a structured way. Here's a simple function to retrieve chat history based on a unique `session_id`:
from langchain.memory import ChatMessageHistory

# Function to Get Chat History
def get_session_history(session_id: str, chat_sessions: dict, chat_titles: dict) -> ChatMessageHistory:
    # Check if the session already exists in the chat_sessions dictionary
    if session_id not in chat_sessions:
        # If it doesn't exist, initialize a new ChatMessageHistory object for the session
        chat_sessions[session_id] = ChatMessageHistory()
        chat_titles[session_id] = "New Chat"  # Set a default title for the new session
    # Return the ChatMessageHistory object for the given session_id
    return chat_sessions[session_id]
- We are now using the `ChatMessageHistory` class provided by LangChain to store and manage the chat history. This class allows us to store messages and easily manage the state of the conversation.
- As before, we check if the `session_id` already exists in the `chat_sessions` dictionary. If it doesn't, we create a new `ChatMessageHistory` object, which will store the chat messages for that particular session.
- The function returns the `ChatMessageHistory` object for the specified session, allowing you to interact with and retrieve the history of that session as needed.
- This implementation ensures that each conversation has its own dedicated memory (in the form of `ChatMessageHistory`). By using `ChatMessageHistory()`, we can easily append new messages to the session’s history and retrieve them whenever required, as the short example below shows.
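Here's a quick, illustrative way to exercise this function with two made-up session IDs, using `ChatMessageHistory`'s standard `add_user_message` / `add_ai_message` helpers:

```python
# Illustrative usage: two sessions, two independent histories
chat_sessions, chat_titles = {}, {}

history_a = get_session_history("session-a", chat_sessions, chat_titles)
history_a.add_user_message("What is a Python decorator?")
history_a.add_ai_message("A decorator wraps a function to extend its behavior.")

history_b = get_session_history("session-b", chat_sessions, chat_titles)
history_b.add_user_message("Share a pasta recipe.")

print(len(history_a.messages))  # 2 -> only session-a's messages
print(len(history_b.messages))  # 1 -> session-b stays independent
```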
Implementing a Chatbot with Multiple Chat Sessions
Prerequisite
This article builds on the concepts from the first blog. Before proceeding, ensure you've covered the previous blogs. We’ll pick up where we left off and make adjustments to the `app.py` file to upgrade our chatbot’s memory. 🧠✨
Step 4: Creating the Main Working File → app.py
1.1 Importing Modules
import os # For accessing environment variables
import time # For handling time delays
import streamlit as st # For building interactive web apps
from dotenv import load_dotenv # For loading environment variables from a .env file
from operator import itemgetter # For accessing items from collections
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage # For defining message types in chat
from langchain_core.chat_history import BaseChatMessageHistory # For managing chat history
from langchain_community.chat_message_histories import ChatMessageHistory # For storing chat messages in memory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder # For creating and using chat prompts
from langchain_core.runnables import RunnablePassthrough # For processing data through a sequence of steps
from langchain_groq import ChatGroq # For interacting with the Groq AI model
1.2 Loading the Environment Variables and Setting Up the Model
load_dotenv() # Load .env variables
groq_api_key = os.getenv("GROQ_API_KEY") # Retrieve API key
model = ChatGroq(model="Gemma2-9b-It", groq_api_key=groq_api_key) # Initialize the model
- This is the same setup as the previous chatbot. Ensure your `.env` file has the correct `GROQ_API_KEY`.
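If you haven't created it yet, the `.env` file only needs a single line; the key name must match what `os.getenv` looks up, and the value below is just a placeholder:

```
GROQ_API_KEY=your_groq_api_key_here
```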
1.3 Defining the Prompt Template & Creating the Processing Chain
# Define the main prompt template for the chatbot
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
    MessagesPlaceholder(variable_name="messages"),  # Placeholder for dynamic user and AI messages
])

# Define the prompt for generating a title based on the first user message
title_prompt = ChatPromptTemplate.from_messages([
    ("system", "Generate a short and relevant title for a conversation based on the given message."),
    ("human", "{message}"),  # User's first message used for title generation
])

# Define the processing chain: extract messages, apply prompt, and send to model
chain = (
    RunnablePassthrough.assign(messages=itemgetter("messages"))  # Extracts messages dynamically
    | prompt  # Formats the input using the defined prompt template
    | model  # Passes the formatted data to the model for generating a response
)
Prompt Template:
- `ChatPromptTemplate.from_messages([...])` sets the assistant’s behavior.
- The system message provides context, ensuring helpful and accurate responses.
- `MessagesPlaceholder(variable_name="messages")` dynamically injects chat history.
Title Generation Prompt:
- Used to generate a short, meaningful title for each chat session based on the user's first message.
Processing Chain:
- `RunnablePassthrough.assign(messages=itemgetter("messages"))` extracts messages dynamically.
- The `|` operator creates a streamlined pipeline: extract chat messages → apply formatting via prompt → send input to model (see the invocation sketch below).
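To make the chain's input shape concrete, here is a minimal, illustrative invocation. It assumes the `chain` built above is in scope; the question text is just an example:

```python
from langchain_core.messages import HumanMessage

# The chain expects a dict with a "messages" key holding the conversation so far
response = chain.invoke({
    "messages": [HumanMessage(content="Explain Python list comprehensions in one line.")]
})
print(response.content)  # the model's reply as plain text
```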
1.4 Initializing Chat Sessions and Managing Chat History
# Initialize chat sessions in session state (persistent across the app)
if "chat_sessions" not in st.session_state:
    st.session_state.chat_sessions = {"Chat 1": ChatMessageHistory()}  # Default first chat session
    st.session_state.current_chat = "Chat 1"  # Set default active chat session
    st.session_state.chat_titles = {"Chat 1": "New Chat"}  # Default title for the chat

# Function to get chat history for a selected session
def get_session_history(session_id: str) -> BaseChatMessageHistory:
    # If the session does not exist, create a new chat session and initialize its history
    if session_id not in st.session_state.chat_sessions:
        st.session_state.chat_sessions[session_id] = ChatMessageHistory()
        st.session_state.chat_titles[session_id] = "New Chat"
    return st.session_state.chat_sessions[session_id]  # Return the chat history of the session
Session Initialization:
- Ensures chat history persists across different interactions.
- Uses Streamlit’s `session_state` to store multiple chat sessions.
- A default chat session ("Chat 1") is created with an empty message history.
Managing Chat History:
- `get_session_history(session_id: str)` retrieves the chat history for a specific session.
- If a new chat session is accessed, it automatically initializes a fresh history and title.
Why Does This Matter?
- This setup allows seamless switching between multiple chats, just like a messaging app.
- Each chat session maintains its own history, so conversations remain independent. 🚀
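Purely for illustration, here is how the helper behaves when a brand-new session ID is requested. Streamlit reruns your script top to bottom on every interaction, but `st.session_state` survives those reruns, so these dictionaries persist:

```python
# Illustrative only (runs inside the Streamlit app, not as a standalone script)
history = get_session_history("Chat 2")        # "Chat 2" didn't exist -> created on the fly
history.add_user_message("Hello from a second conversation!")

print(st.session_state.chat_titles["Chat 2"])  # "New Chat" (default title)
print(len(st.session_state.chat_sessions))     # now holds both "Chat 1" and "Chat 2"
```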
1.5 Creating a Sidebar for Chat Management
# Sidebar for managing multiple chat sessions
with st.sidebar:
    st.header("Chats")  # Sidebar title

    # Retrieve available chat sessions
    chat_keys = list(st.session_state.chat_sessions.keys())

    for chat_id in chat_keys:
        col1, col2 = st.columns([4, 1])  # Column layout: Chat title (col1) and delete button (col2)
        with col1:
            # Highlight active chat session
            if st.session_state.current_chat == chat_id:
                st.markdown(f"**{st.session_state.chat_titles[chat_id]}**")  # Bold for active chat
            else:
                # Button to switch between chat sessions
                if st.button(st.session_state.chat_titles[chat_id], key=chat_id):
                    st.session_state.current_chat = chat_id  # Set selected chat as active
                    st.rerun()  # Refresh UI
        with col2:
            # Delete button to remove a chat session
            if st.button("❌", key=f"delete_{chat_id}"):
                del st.session_state.chat_sessions[chat_id]  # Remove chat history
                del st.session_state.chat_titles[chat_id]  # Remove associated title

                # If the deleted chat was active, switch to another session or create a new one
                if st.session_state.current_chat == chat_id:
                    if st.session_state.chat_sessions:
                        st.session_state.current_chat = next(iter(st.session_state.chat_sessions))  # Select first chat
                    else:
                        # Create a new default chat if no sessions remain
                        st.session_state.chat_sessions["Chat 1"] = ChatMessageHistory()
                        st.session_state.chat_titles["Chat 1"] = "New Chat"
                        st.session_state.current_chat = "Chat 1"
                st.rerun()  # Refresh UI after deletion

    # Button to create a new chat session
    if st.button("➕ New Chat"):
        new_chat_id = f"Chat {len(st.session_state.chat_sessions) + 1}"  # Generate unique chat ID
        st.session_state.chat_sessions[new_chat_id] = ChatMessageHistory()  # Initialize new chat history
        st.session_state.chat_titles[new_chat_id] = "New Chat"  # Assign default title
        st.session_state.current_chat = new_chat_id  # Set new chat as active
        st.rerun()  # Refresh UI
Sidebar Layout:
- Displays a list of existing chat sessions.
- Each chat session has a title and a delete button.
Chat Selection:
- Clicking a chat title switches to that chat (`current_chat`).
- The UI refreshes (`st.rerun()`) to reflect the change.
Chat Deletion:
- Clicking the ❌ button removes the chat session and its title.
- If the deleted chat was active, the app switches to another session if available, or creates a new default chat if none remain.
New Chat Creation:
- Clicking "➕ New Chat" generates a new unique chat ID.
- The session is initialized and set as active.
- The UI updates to reflect the new session.
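One thing to watch: `f"Chat {len(...) + 1}"` can collide with an existing key after deletions (for example, delete "Chat 1" while "Chat 2" exists, and the next new chat is also named "Chat 2", overwriting its history). An optional hardening tweak, sketched below with a hypothetical `next_chat_id` helper, scans for the first free number instead:

```python
def next_chat_id(existing_ids) -> str:
    """Return the first 'Chat N' that isn't already taken."""
    n = 1
    while f"Chat {n}" in existing_ids:
        n += 1
    return f"Chat {n}"

# Usage inside the "➕ New Chat" handler:
# new_chat_id = next_chat_id(st.session_state.chat_sessions)
```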
1.6 Handling User Input and Model Response
# Main Chat UI
st.title("Multi-Chat AI Assistant 🧠")  # Set the title of the web app

# Retrieve current chat history for the active chat session
chat_history = st.session_state.chat_sessions[st.session_state.current_chat]

# Display previous chat messages (either from user or AI)
for message in chat_history.messages:
    with st.chat_message("user" if isinstance(message, HumanMessage) else "assistant"):
        st.markdown(message.content)  # Display each message, user or assistant

# Input field for user message
if prompt_text := st.chat_input("Enter your message..."):  # Wait for user input
    st.chat_message("user").markdown(prompt_text)  # Display the user's message immediately
    user_message = HumanMessage(content=prompt_text)  # Create a HumanMessage object for the input text
    chat_history.add_message(user_message)  # Add the user's message to the chat history

    # Prepare messages for AI response: include the system message and chat history
    messages = [
        SystemMessage(content="You are a helpful assistant. Answer all questions to the best of your ability."),
        *chat_history.messages  # Include all previous messages to maintain context
    ]

    # Invoke the chain to get AI response
    response = chain.invoke({"messages": messages})
    response_text = response.content  # Extract the AI response from the chain
Setting the Title:
- The title "Multi-Chat AI Assistant 🧠" is displayed at the top of the interface using `st.title()`. It's a simple way to give the app an engaging header.
Fetching the Current Chat History:
- The chat history for the currently active session is fetched from Streamlit’s session state using `st.session_state.chat_sessions[st.session_state.current_chat]`. This allows the assistant to maintain context throughout the conversation, tracking all previous messages.
Displaying Previous Messages:
- All messages in the chat history are displayed: if a message is from the user, it's shown as a user message; if it's from the assistant, it's shown as an assistant message.
- `st.chat_message()` is used to render the messages with proper formatting (like chat bubbles), and `st.markdown()` is used to display the actual message content.
Handling User Input:
- `st.chat_input("Enter your message...")` is the text box where users type their messages.
- When a message is entered, it is immediately displayed in the chat using `st.chat_message("user")`, so users can see what they've typed before the AI responds.
- The message is then converted into a `HumanMessage` object and added to the chat history to keep track of all interactions.
Preparing the AI Response:
- The system message ("You are a helpful assistant...") is included to provide context to the assistant.
- The entire chat history (including the user’s new message) is passed into the AI model to maintain the conversation's context.
- The chain (which processes the messages and invokes the model) is used to get the AI’s response.
In short: this code handles chat interaction by displaying previous messages, allowing user input, adding the user's message to history, and fetching a response from the AI model based on the complete chat history.
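For intuition, after a couple of turns the `messages` list handed to the chain looks roughly like this (contents shortened and invented purely for illustration):

```python
# Roughly what gets passed to chain.invoke({"messages": messages}) on the second user turn
messages = [
    SystemMessage(content="You are a helpful assistant. Answer all questions to the best of your ability."),
    HumanMessage(content="What is a Python list?"),                   # turn 1 (user)
    AIMessage(content="A list is an ordered, mutable sequence..."),   # turn 1 (assistant)
    HumanMessage(content="How do I sort one in reverse?"),            # turn 2 (user, just added)
]
```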
1.7 Rendering the Chatbot’s Response
# Function for word-by-word response animation
def response_generator(text):
    for word in text.split():
        yield word + " "  # Yield each word with a space
        time.sleep(0.05)  # Simulate typing delay (0.05 seconds per word)

# Display AI response with animation (word-by-word)
with st.chat_message("assistant"):  # Start displaying the assistant's message
    response_placeholder = st.empty()  # Create an empty placeholder for the response animation
    response = ""  # Initialize an empty string to build the response progressively
    for partial_response in response_generator(response_text):  # Loop through the word-by-word generator
        response += partial_response  # Append each word to the response
        response_placeholder.markdown(response)  # Update the displayed response in the placeholder
    response_placeholder.markdown(response_text)  # Final display of the complete response once animation finishes

# Add AI response to the chat history
ai_message = AIMessage(content=response_text)  # Create a new AIMessage object
chat_history.add_message(ai_message)  # Add the AI's response to the chat history
The `response_generator` function:
- Takes the AI response text, splits it into individual words, and simulates typing by adding a delay of 0.05 seconds between words. This delay makes it appear as if the assistant is typing the message progressively.
- The `yield` keyword allows the function to return each word one by one, rather than all at once.
Displaying the AI response with animation:
- `st.chat_message("assistant")` marks the start of the assistant's response in the chat interface.
- `st.empty()` creates a placeholder to hold the animated response while it updates progressively.
- As each word is generated by `response_generator`, it’s appended to the response string and displayed in the placeholder using `response_placeholder.markdown(response)`.
- Once the animation is complete, the full response is displayed by calling `response_placeholder.markdown(response_text)`.
Adding the AI response to chat history:
- After the animation is finished, the full response text is added to the chat history using `chat_history.add_message(ai_message)`. This preserves the conversation history for future interactions.
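If you're on a recent Streamlit release, `st.write_stream` can replace the manual placeholder loop while reusing the same generator. A possible sketch, assuming your Streamlit version supports it:

```python
# Alternative: let st.write_stream drive the word-by-word animation
with st.chat_message("assistant"):
    st.write_stream(response_generator(response_text))

# History handling stays the same
chat_history.add_message(AIMessage(content=response_text))
```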
1.8 Generating and Setting the Chat Title Based on the User's First Message
# Generate and set the chat title based on the first user message (if it's a new chat)
if st.session_state.chat_titles[st.session_state.current_chat] == "New Chat":  # Check if it's a new chat
    title_response = model.invoke([  # Use the model to generate a title
        SystemMessage(content="Generate a short and relevant title for a conversation based on the given message. JUST GIVE ONE OR TWO RELEVANT WORDS"),
        HumanMessage(content=prompt_text)  # Pass the user's message to the model for title generation
    ])
    generated_title = title_response.content.strip() if title_response else "Chat"  # Extract and clean the generated title
    st.session_state.chat_titles[st.session_state.current_chat] = generated_title  # Set the generated title for the chat
    st.rerun()  # Refresh the UI to display the new title
Check if it's a new chat:
- `if st.session_state.chat_titles[st.session_state.current_chat] == "New Chat":` checks whether the current chat is still marked as "New Chat" in the session state. This is the condition for generating a title only for new conversations.
Generate a title using the model:
- The model is invoked to generate a short, relevant title based on the user's first message (`prompt_text`).
- A `SystemMessage` is passed to the model, instructing it to generate one or two relevant words that summarize the conversation.
- The user's message (`prompt_text`) is passed as a `HumanMessage` to provide context for the title generation.
Set the generated title:
- Once the model generates the title, `title_response.content.strip()` cleans the response by removing any unwanted whitespace.
- If no response is received, the title falls back to "Chat".
- The generated title is stored in `st.session_state.chat_titles[st.session_state.current_chat]` to associate it with the current chat.
Refresh the UI:
- `st.rerun()` is called to refresh the Streamlit interface and display the newly generated title immediately.
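Note that the `title_prompt` template defined in step 1.3 isn't actually used here, since the code calls `model.invoke()` directly with raw messages. If you prefer to keep everything template-driven, you could route title generation through that template instead; a possible sketch:

```python
# Optional: generate the title via the title_prompt template instead of raw messages
title_chain = title_prompt | model
title_response = title_chain.invoke({"message": prompt_text})
generated_title = title_response.content.strip() or "Chat"  # fall back to "Chat" if empty
```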
Complete Code
import os
import time
import streamlit as st
from dotenv import load_dotenv
from operator import itemgetter
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_groq import ChatGroq
# Load API Key from .env file
load_dotenv()
groq_api_key = os.getenv("GROQ_API_KEY")
# Initialize Model with API Key
model = ChatGroq(model="Gemma2-9b-It", groq_api_key=groq_api_key)
# Define the main prompt template for the chatbot
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
    MessagesPlaceholder(variable_name="messages"),
])

# Define the prompt for generating a title for the chat session based on user input
title_prompt = ChatPromptTemplate.from_messages([
    ("system", "Generate a short and relevant title for a conversation based on the given message."),
    ("human", "{message}"),
])
# Define the processing chain: applying the prompt, running the model, and passing the response
chain = (RunnablePassthrough.assign(messages=itemgetter("messages")) | prompt | model)
# Initialize Chat Sessions in session state (persistent across the app)
if "chat_sessions" not in st.session_state:
st.session_state.chat_sessions = {"Chat 1": ChatMessageHistory()} # Default first chat session
st.session_state.current_chat = "Chat 1" # Set default active chat to Chat 1
st.session_state.chat_titles = {"Chat 1": "New Chat"} # Set a default title for Chat 1
# Function to Get Chat History for the selected session
def get_session_history(session_id: str) -> BaseChatMessageHistory:
# If the session doesn't exist, create a new chat session and initialize its history
if session_id not in st.session_state.chat_sessions:
st.session_state.chat_sessions[session_id] = ChatMessageHistory()
st.session_state.chat_titles[session_id] = "New Chat"
return st.session_state.chat_sessions[session_id]
# Sidebar for Chat Management
with st.sidebar:
    st.header("Chats")  # Title for the sidebar section

    # List of all chat sessions
    chat_keys = list(st.session_state.chat_sessions.keys())

    # Display each chat session with its title and delete option
    for chat_id in chat_keys:
        col1, col2 = st.columns([4, 1])  # Layout with two columns: one for title, one for delete button
        with col1:
            # Highlight the current chat session
            if st.session_state.current_chat == chat_id:
                st.markdown(f"**{st.session_state.chat_titles[chat_id]}**")  # Bold current chat title
            else:
                # Button to switch between chat sessions
                if st.button(st.session_state.chat_titles[chat_id], key=chat_id):
                    st.session_state.current_chat = chat_id  # Set selected chat as the current one
                    st.rerun()  # Refresh the page to show the new chat
        with col2:
            # Button to delete a chat session
            if st.button("❌", key=f"delete_{chat_id}"):
                del st.session_state.chat_sessions[chat_id]  # Remove the chat session
                del st.session_state.chat_titles[chat_id]  # Remove the chat title

                # If deleted chat was the current chat, switch to another one or create a new one
                if st.session_state.current_chat == chat_id:
                    if st.session_state.chat_sessions:
                        st.session_state.current_chat = next(iter(st.session_state.chat_sessions))  # Select first chat
                    else:
                        # If no chats remain, create a new default chat
                        st.session_state.chat_sessions["Chat 1"] = ChatMessageHistory()
                        st.session_state.chat_titles["Chat 1"] = "New Chat"
                        st.session_state.current_chat = "Chat 1"
                st.rerun()  # Refresh UI after deletion

    # Button to create a new chat session
    if st.button("➕ New Chat"):
        new_chat_id = f"Chat {len(st.session_state.chat_sessions) + 1}"  # Generate a new unique chat ID
        st.session_state.chat_sessions[new_chat_id] = ChatMessageHistory()  # Initialize the new chat history
        st.session_state.chat_titles[new_chat_id] = "New Chat"  # Set a default title for the new chat
        st.session_state.current_chat = new_chat_id  # Set the new chat as the current chat
        st.rerun()  # Refresh to update the UI with the new chat
# Main Chat UI
st.title("Multi-Chat AI Assistant 🧠")
# Retrieve current chat history for the active chat session
chat_history = st.session_state.chat_sessions[st.session_state.current_chat]
# Display previous chat messages (either from user or AI)
for message in chat_history.messages:
    with st.chat_message("user" if isinstance(message, HumanMessage) else "assistant"):
        st.markdown(message.content)

# Input field for user message
if prompt_text := st.chat_input("Enter your message..."):
    # Display user's message in the chat
    st.chat_message("user").markdown(prompt_text)
    user_message = HumanMessage(content=prompt_text)  # Create a HumanMessage object
    chat_history.add_message(user_message)  # Add the user's message to the chat history

    # Prepare messages for the AI response, including system message and chat history
    messages = [
        SystemMessage(content="You are a helpful assistant. Answer all questions to the best of your ability."),
        *chat_history.messages  # Include the full chat history
    ]

    # Invoke the chain to get AI response
    response = chain.invoke({"messages": messages})
    response_text = response.content  # Extract the AI response

    # Function for word-by-word response animation
    def response_generator(text):
        for word in text.split():
            yield word + " "
            time.sleep(0.05)  # Simulate typing delay

    # Display AI response with animation (word-by-word)
    with st.chat_message("assistant"):
        response_placeholder = st.empty()
        response = ""
        for partial_response in response_generator(response_text):
            response += partial_response
            response_placeholder.markdown(response)  # Update the displayed response
        response_placeholder.markdown(response_text)  # Final display of complete response

    # Add AI response to the chat history
    ai_message = AIMessage(content=response_text)
    chat_history.add_message(ai_message)

    # Generate and set the chat title based on the first user message (if it's a new chat)
    if st.session_state.chat_titles[st.session_state.current_chat] == "New Chat":
        title_response = model.invoke([
            SystemMessage(content="Generate a short and relevant title for a conversation based on the given message. JUST GIVE ONE OR TWO RELEVANT WORDS"),
            HumanMessage(content=prompt_text)
        ])
        generated_title = title_response.content.strip() if title_response else "Chat"
        st.session_state.chat_titles[st.session_state.current_chat] = generated_title  # Set generated title
        st.rerun()  # Refresh UI to display the new title
Step 5: Launch Your Chatbot 🚀
Congratulations! Your chatbot is ready to roll. 🎉 Let’s bring it to life:
- Run the Application:
Fire up your terminal, activate your virtual environment, and type this magical command:
streamlit run app.py
(Replace `app.py` with your file’s name if you chose something fancier! 🧐)
- Watch It in Action:
Your browser will open, and just like that, your chatbot will be live, ready to showcase its conversational skills. Test it out and enjoy the results of your hard work!
And That’s a Wrap! 🎉
So, in this fun little project, we’ve set up a multi-chat AI assistant that can juggle multiple conversations, animate responses, and even come up with chat titles (because who doesn't love a good title? 😎). But let’s be real – storing all that chat history locally? Not the best idea if we want to scale. It’s like trying to run a marathon in flip-flops 🩴. We can’t manage multiple users or massive memory with this setup.
What’s Next? 🚀
Next up, we’re taking this to the next level 🚀 by moving chat history to external storage. This will make our chatbot smarter, more scalable, and ready to handle the big leagues (aka, real-world scenarios 🌍). So stay tuned for the next upgrade!
Got Ideas? 💡
Have suggestions or feedback for us? Or maybe you’ve got a cool feature idea for the chatbot? Let us know in the comments or drop me a message—I’d love to hear from you! 😊