Building a Simple Chatbot using LangChain and Gemini API: A Step-by-Step Guide

Artificial Intelligence (AI) chatbots are becoming an essential part of our digital world, from answering customer queries to helping students learn more efficiently. But have you ever wondered how you can build your own chatbot using modern tools like LangChain and Google Gemini?
In this tutorial, we’ll build a simple but powerful chatbot that responds politely and clearly, using LangChain’s ChatPromptTemplate and the Gemini API.
Whether you’re a beginner or an AI enthusiast, by the end of this article, you’ll have a working chatbot and understand how each piece fits together.
What You’ll Learn
What LangChain and Gemini are
How to connect Gemini API using an environment file
How to use LangChain’s prompt templates and message schema
How to build a chatbot function
How to run a continuous chat loop in Python
What is LangChain?
LangChain is a powerful Python framework that makes it easier to build AI applications with Large Language Models (LLMs) like Gemini, OpenAI GPT, or Claude.
It provides tools for:
Prompt management (creating dynamic prompts)
Chaining tasks (combining multiple AI steps)
Memory and message handling
Integration with external data and APIs
Simply put, LangChain acts as the “orchestrator” that connects your ideas with the intelligence of models like Gemini.
What is Google Gemini?
Google Gemini is Google’s next-generation multimodal AI model (from Google DeepMind). It’s capable of understanding and generating text, images, and even reasoning about complex tasks.
For our chatbot, we’ll use Gemini 2.5 Flash, which is optimized for fast responses, making it perfect for real-time chat experiences.
Setting Up Your Environment
Before writing code, let’s prepare the setup.
Step 1: Create a Project Folder
Create a folder for your chatbot project:
mkdir gemini_chatbot
cd gemini_chatbot
Step 2: Create and Activate a Virtual Environment
It’s good practice to use a virtual environment to manage dependencies.
python -m venv venv
source venv/bin/activate  # Mac/Linux
venv\Scripts\activate     # Windows
Step 3: Install Required Packages
Install all the dependencies needed for this project:
pip install langchain langchain-google-genai python-dotenv
These libraries will handle:
langchain → prompt handling and structured messages.
langchain-google-genai → Gemini model integration.
python-dotenv → reading environment variables securely.
Step 4: Create a .env File
Create a file named .env in your project root directory. This file stores your Google API key safely.
GOOGLE_API_KEY=your_api_key_here
Note: Never share your .env file publicly or upload it to GitHub.
Writing the Chatbot Code
Now, open your code editor (VS Code or PyCharm) and create a Python file named:
chatbot.py
Paste the following code:
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate
from dotenv import load_dotenv
import os

# Load API key from .env file
load_dotenv()

# Initialize the Gemini model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    api_key=os.getenv("GOOGLE_API_KEY")
)

# Create a simple chat prompt template
prompt = ChatPromptTemplate.from_template(
    "You are a helpful AI chatbot. Answer the user query politely and clearly.\n\nUser: {question}"
)

def Chatbot(user_input: str):
    """Generate a chatbot response from the Gemini model using LangChain."""
    messages = prompt.format_messages(question=user_input)
    response = llm.invoke(messages)
    return response

if __name__ == "__main__":
    print("Gemini Chatbot (type 'exit' to quit)\n")
    while True:
        print(" Human Message ".center(80, "="))
        user_input = input()
        if user_input.lower() in ["exit", "quit"]:
            print(" Bot exit ".center(80, "*"))
            break
        chatbot_response = Chatbot(user_input)
        chatbot_response.pretty_print()
Code Explanation
Let’s break this down step-by-step.
1. Importing Required Libraries
We import everything we need:
ChatGoogleGenerativeAI → connects LangChain to Gemini.
ChatPromptTemplate → lets us design structured prompts.
load_dotenv → reads the .env file and loads our API key securely.
2. Loading Environment Variables
load_dotenv()
This reads the .env file and loads your GOOGLE_API_KEY into the environment.
3. Initializing the Gemini Model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    api_key=os.getenv("GOOGLE_API_KEY")
)
Here, we connect to Gemini 2.5 Flash, a fast, lightweight model suitable for chat.
4. Creating the Prompt Template
prompt = ChatPromptTemplate.from_template(
    "You are a helpful AI chatbot. Answer the user query politely and clearly.\n\nUser: {question}"
)
LangChain lets us use placeholders ({question}) so that every user input gets formatted properly before being sent to the model.
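To see what this placeholder substitution does without calling LangChain at all, here is a plain standard-library stand-in (the template string is copied from the code above; the render helper is illustrative, not part of LangChain):

```python
# The same template string used in the tutorial code.
TEMPLATE = (
    "You are a helpful AI chatbot. Answer the user query politely and clearly."
    "\n\nUser: {question}"
)

def render(question: str) -> str:
    """Substitute the user's text into the {question} slot, as the
    LangChain template does internally before the model is called."""
    return TEMPLATE.format(question=question)

print(render("What is LangChain?"))
```

The model never sees the bare user input; it always receives the full instruction plus the filled-in question.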
5. The Chatbot Function
def Chatbot(user_input: str):
    messages = prompt.format_messages(question=user_input)
    response = llm.invoke(messages)
    return response
This function:
Formats the message with the user’s input.
Sends it to Gemini.
Returns the AI’s response.
6. The Chat Loop
if __name__ == "__main__":
    ...
This section keeps the chatbot running until you type “exit”.
Each time you enter text, it’s formatted with the prompt template (which wraps it in a HumanMessage under the hood) and sent to the Gemini model.
Testing the Chatbot
Now run your chatbot from the terminal:
python chatbot.py
You’ll see:
Gemini Chatbot (type 'exit' to quit)
================== Human Message ==================
Hello, who are you?
Gemini will reply politely, for example:
I’m an AI chatbot powered by Google Gemini. How can I help you today?
You can keep chatting or type exit to end the session.

How It Works Internally
Let’s understand the flow:
User Input → You type a message.
Prompt Formatting → LangChain inserts your input into the chat template.
Message Conversion → Message is converted into a format Gemini understands.
Model Inference → Gemini generates a text response.
Display Output → The chatbot prints the AI’s reply clearly.
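The five steps above can be sketched as a tiny, model-free pipeline. The real Gemini call is replaced by a stub (fake_model) so the data flow is the focus; all names here are illustrative:

```python
def format_prompt(user_input: str) -> str:
    # Prompt Formatting: insert the input into the instruction template.
    return f"You are a helpful AI chatbot.\n\nUser: {user_input}"

def to_model_messages(prompt_text: str) -> list:
    # Message Conversion: shape the prompt as a chat message list.
    return [{"role": "user", "content": prompt_text}]

def fake_model(messages: list) -> str:
    # Model Inference (stubbed): echo the question back.
    return "Echo: " + messages[-1]["content"].split("User: ", 1)[-1]

def chat_once(user_input: str) -> str:
    # Display Output: return the reply for printing.
    return fake_model(to_model_messages(format_prompt(user_input)))

print(chat_once("Hello"))  # Echo: Hello
```

Swapping fake_model for llm.invoke is essentially what the tutorial code does.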

This modular setup makes your code easier to expand later, for example by adding memory, voice, or even retrieval-augmented generation (RAG).
Next Steps: Expanding the Chatbot
Now that you’ve built a simple chatbot, here are some cool things you can do next:
1. Add Conversation Memory
Use LangChain’s ConversationBufferMemory to make your chatbot remember previous messages.
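The core idea behind any memory class is simple: keep a running history and feed it back in with each new prompt. Here is a minimal standard-library sketch of that idea (the model call is stubbed and all names are illustrative; LangChain’s memory classes automate this bookkeeping for you):

```python
# Running conversation history as (speaker, text) pairs.
history: list[tuple[str, str]] = []

def remember_and_reply(user_input: str) -> str:
    """Build a prompt that includes the full transcript so far,
    then record both sides of the exchange."""
    transcript = "\n".join(f"{who}: {text}" for who, text in history)
    prompt_text = (f"{transcript}\nUser: {user_input}"
                   if transcript else f"User: {user_input}")
    reply = f"(model sees {len(history)} earlier messages)"  # stubbed model
    history.append(("User", user_input))
    history.append(("Bot", reply))
    return reply

print(remember_and_reply("Hi"))      # (model sees 0 earlier messages)
print(remember_and_reply("Again"))   # (model sees 2 earlier messages)
```

Because the transcript grows with every turn, the model can refer back to earlier messages, which is exactly what makes a chatbot feel conversational.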
2. Build a FastAPI Backend
Wrap your chatbot in a FastAPI endpoint to turn it into a web API.
3. Create a Web Interface
Use Streamlit or Gradio to make a visual chat interface.
4. Integrate with Databases
Connect to a PostgreSQL or vector store to make your bot more context-aware.
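The essence of context-awareness is retrieval: find the stored snippet most relevant to the question and prepend it to the prompt. Here is a toy standard-library sketch using keyword overlap as the relevance score (a real setup would use embeddings in a vector store; the snippets and helper names are illustrative):

```python
import re

# Toy knowledge base standing in for database rows or vector-store documents.
DOCS = [
    "LangChain is a Python framework for building LLM applications.",
    "Gemini 2.5 Flash is optimized for fast, real-time responses.",
]

def words(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Return the snippet sharing the most words with the question."""
    q = words(question)
    return max(DOCS, key=lambda d: len(q & words(d)))

def build_prompt(question: str) -> str:
    # Prepend the retrieved context, then ask the question.
    return f"Context: {retrieve(question)}\n\nUser: {question}"

print(build_prompt("What is LangChain?"))
```

With embeddings instead of keyword overlap, the same build_prompt pattern becomes a basic RAG pipeline.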
Why LangChain + Gemini?
Combining LangChain and Gemini gives you the best of both worlds:
LangChain → structure, modularity, and flexibility.
Gemini → cutting-edge reasoning, understanding, and speed.
Together, they make building intelligent chat systems both accessible and powerful.
Conclusion
You’ve just built a fully functional AI chatbot from scratch using LangChain and the Google Gemini API!
This blog demonstrates how easy it is to combine frameworks and models to create intelligent, responsive chat systems.
As AI continues to evolve, tools like LangChain and Gemini will make it even easier to build agentic, context-aware applications for education, business, and beyond.