

AI Agent That Manages Your Emails And Calendar | LangChain

  • Writer: Pavol Megela
  • Apr 14
  • 5 min read

In today’s fast-paced digital world, managing emails can be a time-consuming task. Whether it’s responding to routine inquiries, drafting follow-ups, or handling customer support requests, manually writing emails eats up valuable time. But what if you had a personal AI agent that could draft email responses for you?


In this article, we’ll build an AI Email Responder using LangChain, a powerful framework for developing applications with large language models (LLMs). Our AI agent will:

  • Read user input

  • Generate a smart, context-aware email response

  • Adapt to different tones (formal, casual, professional)

  • Be extendable with memory for more personalized responses

  • Connect to your calendar for additional context

  • Send you the generated response for your approval


Let’s dive in!


LangChain: The power behind AI Agents

Before we dive into building our AI agent, let’s take a moment to understand LangChain and why it’s the perfect tool for this task.


LangChain is an open-source framework for building applications based on Large Language Models (LLMs) like OpenAI’s GPT, Anthropic’s Claude or Google’s Gemini. Instead of just calling an API to generate text, LangChain helps you build AI agents that can:

  • Chain multiple steps together (process an email and generate a response)

  • Use memory to remember past interactions

  • Integrate with tools like web search, databases, and APIs


Why LangChain?

For an email responder, we need more than just a simple text generator. Our AI assistant should:

  • Analyze the email content and generate contextually relevant replies

  • Maintain a consistent tone (formal for business, casual for friendly chats)

  • Be extendable with additional tools (retrieving past conversations, scheduling follow-ups)


Setting Up the Environment

Before we start building our AI-powered Email Responder, we need to set up our development environment. We’ll be using Jupyter Notebook or JupyterLab for this tutorial. Both are interactive environments for running Python code, and you can use either based on your preference.


Install & Launch Jupyter Notebook / JupyterLab

To install Jupyter on your local machine, follow the official installation guide on the Jupyter website.


Open your terminal (Mac/Linux) or command prompt (Windows) and run:

pip install --upgrade langchain langchain_community langchain_openai python-dotenv

Start Jupyter Notebook/Lab

Once the installation is complete, start Jupyter Notebook or JupyterLab as described in the official guide.


Set Up OpenAI API Key

To use OpenAI’s GPT model, you need an API key. Here's how:

  1. Get your OpenAI API key from OpenAI’s platform

  2. Create a .env file in your project folder and add your API key to it (see the example below)

  3. If your OpenAI account has no credit, you will need to add some before API calls will work
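For reference, the .env file needs just a single line. The value below is a placeholder; use your own key and keep the file out of version control:

OPENAI_API_KEY=sk-your-key-here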


Let's BUILD!

Now that our environment is set up, it’s time to build our AI agent using LangChain. Our AI agent will take an email message as input and generate an appropriate reply based on the content and desired tone.


Inside your Jupyter Notebook, copy and run the following code:

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate

# Load environment variables from the .env file
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")

# Confirm the key was found without printing the secret itself
print("API Key Loaded:", openai_api_key is not None)

If you run the cell and see API Key Loaded: True, everything is set up correctly and we can continue.


Now we create our first simple function for generating an email response.

def generate_email_reply(email_text, tone="formal"):
    # Prompt template with placeholders for the email text and desired tone
    prompt_template = PromptTemplate(
        input_variables=["email_text", "tone"],
        template="You are an AI email assistant. Read the email below and generate a {tone} reply.\n\nEmail:\n{email_text}\n\nReply:"
    )

    # Initialize OpenAI's GPT model
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key=openai_api_key)

    # Chain the prompt and the model together using LangChain's pipe syntax
    chain = prompt_template | llm

    # Run the chain; the model returns a message object, so we extract its text
    response = chain.invoke({"email_text": email_text, "tone": tone})

    return response.content

Let’s go step by step to understand what happens inside the function.


  1. Creating a Prompt Template

prompt_template = PromptTemplate(
    input_variables=["email_text", "tone"],
    template="You are an AI email assistant. Read the email below and generate a {tone} reply.\n\nEmail:\n{email_text}\n\nReply:"
)

We’re creating a PromptTemplate; this instructs the AI on how to generate the email response.

input_variables=["email_text", "tone"] 

These are placeholders {email_text} and {tone} that get replaced when the function runs.


The template tells the AI:

  • It is an AI email assistant

  • It should read the email and generate a response

  • It should adjust the response tone based on the input


After processing, the final prompt will look like this:

You are an AI email assistant. Read the email below and generate a casual reply.

Email:
Can we reschedule our meeting?

Reply:
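If you want to verify this yourself, you can fill in the template directly. This is just a quick check, not part of the final agent:

filled_prompt = prompt_template.format(
    email_text="Can we reschedule our meeting?",
    tone="casual"
)
print(filled_prompt)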

  2. Initializing OpenAI’s GPT Model

llm = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key=openai_api_key)

This initializes OpenAI’s GPT-3.5-turbo model, which will generate the email response. We will be using this model because it's cheap and fast.


We specify:

  • Model name → "gpt-3.5-turbo"

  • API Key → Required to access OpenAI’s API
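If you want tighter control over the output, ChatOpenAI also accepts optional parameters such as temperature; a lower value makes replies more consistent and less creative. For example:

# Lower temperature = more predictable, less "creative" replies
llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    openai_api_key=openai_api_key,
    temperature=0.3
)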


3. Connecting the Prompt & AI Model

chain = prompt_template | llm

This is LangChain’s syntax for chaining operations. It connects the prompt_template to the llm using "|" (pipe operator).


This means:

  • The user’s email & tone go into the prompt

  • The AI model (GPT-3.5-turbo) then processes the prompt to generate a response
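Under the hood, the pipe is roughly equivalent to running the two steps yourself. The snippet below is only an illustration of what the chain does, not code you need to add:

# Roughly what `prompt_template | llm` does when invoked
prompt_value = prompt_template.invoke({"email_text": email_text, "tone": tone})
response = llm.invoke(prompt_value)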


  4. Generating the Email Reply

response = chain.invoke({"email_text": email_text, "tone": tone})

What this does:

  • The function passes the email text & tone into the chain

  • The AI reads the email, applies the specified tone, and generates a response

  • .invoke() runs the chain once with the given inputs; it returns a message object, and the function returns its .content, which is the reply text
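If you want to see this for yourself, you can rebuild the same chain at the top level of a notebook cell (assuming you have already run the prompt_template and llm snippets above) and inspect the raw result:

# Assumes prompt_template and llm from the snippets above are defined in the notebook
test_chain = prompt_template | llm
raw = test_chain.invoke({"email_text": "Can we meet tomorrow?", "tone": "casual"})

print(type(raw).__name__)  # AIMessage
print(raw.content)         # the reply text itself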


Let's test it!

Now that we have our AI email responder ready, let’s test it by providing a sample email and checking the AI-generated reply.


In this example, John wants to reschedule a meeting, and our AI should generate a polite response.

sample_email = """  
Subject: Meeting Reschedule  

Hi,  

I hope you're doing well. I wanted to check if we could reschedule our meeting to a later time this week. Let me know your availability.  

Best,  
John  
"""

Let’s call the function and print the AI-generated response:

reply = generate_email_reply(sample_email, tone="formal")

print("AI-Generated Email Reply:\n")
print(reply)

When you run the code, you should see an output similar to this:

AI-Generated Email Reply:

Subject: Re: Meeting Reschedule  

Dear John,  

Thank you for reaching out. I appreciate your request to reschedule our meeting to a later time this week. I am available to discuss alternative meeting times. Please let me know your availability preferences so we can coordinate a new meeting time that works for both of us.  

Looking forward to hearing from you soon.  

Sincerely,  
[Your Name]
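You can also run the same email through a different tone and compare the two replies:

casual_reply = generate_email_reply(sample_email, tone="casual")
print(casual_reply)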

So far, we’ve built a basic AI-powered email responder that can generate replies based on the given email and tone. However, this is just the beginning!




In the next part of this article, we will enhance the AI agent by giving it the ability to:

  • Read incoming emails - Retrieve unread emails from the user’s inbox

  • Generate AI-powered replies - Draft responses based on the email content and selected tone

  • Access the calendar - Check whether the requested meeting time is available

  • Retrieve previous emails with the sender - Understand past conversations for better replies

  • Ask the user for approval - The user reviews the reply and can approve, edit, or deny it (a rough sketch of this step follows below)
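As a small teaser, the approval step could be as simple as a review loop wrapped around the function we already have. This is only a rough sketch of the idea, not the implementation we will build in the next part:

def review_reply(email_text, tone="formal"):
    """Generate a draft and ask the user to approve, edit, or deny it."""
    draft = generate_email_reply(email_text, tone=tone)
    print("Draft reply:\n", draft)

    choice = input("Approve (a), edit (e), or deny (d)? ").strip().lower()
    if choice == "a":
        return draft                          # approved: send as-is
    if choice == "e":
        return input("Enter your edited reply:\n")
    return None                               # denied: nothing is sent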


Stay tuned as we take our AI Agent from prototype to production!






