Getting Started with LangChain: Building a Simple Context-Aware Chatbot

August 20, 2025

In our previous post, we explored LangChain, a powerful framework for developing applications with large language models (LLMs). We discussed its core components, operational workflows, and real-world applications. Now it’s time to see LangChain in action. In this post, we’ll walk through building a basic, context-aware chatbot using LangChain’s modular tools. By the end of this tutorial, you’ll have a working chatbot and a foundation for further exploration. Let’s get started with WeCloudData, the leading data and AI training academy.

The Theory Behind the Chatbot: Why Context Matters

Chatbots are a great way to understand LangChain because they show how it combines LLMs with memory for context-aware conversations. Before we start coding, let’s cover some theory. LLMs like OpenAI’s GPT-3.5-turbo are great at generating text from prompts, but they have a flaw: they are “forgetful.” Each query is treated in isolation, like a goldfish starting over each time. This statelessness is fine for one-off tasks but breaks down in conversations, where context is essential. For example, the question “What’s its population?” only makes sense if the model remembers which city was mentioned earlier.

This is where LangChain comes in, a framework that connects LLMs with tools like memory buffers to create applications that can remember information. In our demo, we’ll use:

  • ChatOpenAI: An LLM integration that simplifies API calls, allowing you to switch models easily.
  • InMemoryChatMessageHistory: A basic memory module that keeps track of the conversation history, allowing the LLM to “remember” past exchanges.
  • RunnableWithMessageHistory: The connector that combines the model and memory, injecting context into prompts automatically.

You can think of it like building a bicycle. The LLM is the pedals (the power source), memory is the frame (holding everything together), and the runnable is the chain (transferring the power). This combination turns a static text generator into a lively conversationalist, showcasing LangChain’s core idea: modularity for scalable AI.

Prerequisites: Setting Up in Google Colab

  1. A Google account to access Colab.
  2. An OpenAI API key (create one at platform.openai.com).
  3. Basic familiarity with Python.

Steps to Run the LangChain Chatbot

1. Set Up a New Colab Notebook

  • Go to Google Colab.
  • Create a new notebook by clicking File > New Notebook.

2. Install Required Packages for the LangChain Chatbot

The chatbot requires the langchain, langchain-openai, and langchain-community packages.

Colab doesn’t have these pre-installed, so you’ll need to install them in a code cell before running anything else.

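The install commands appear as a screenshot in the original post; a minimal equivalent cell might look like this (ipywidgets is added here for the button interface we build in step 7):

```python
!pip install -qU langchain langchain-openai langchain-community ipywidgets
```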

3. Handle the OpenAI API Key

LangChain emphasizes security and flexibility. We use getpass for secure API key input, which avoids hardcoding keys in the notebook.

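The screenshot isn’t reproduced here; a typical Colab pattern for this step is a short sketch like the following:

```python
import os
from getpass import getpass

# Prompt for the key without echoing it to the notebook output,
# then store it where langchain-openai expects to find it
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
```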

4. Model Initialization

This part is the brain of our chatbot. LLMs generate text probabilistically, and the temperature setting (0 to 1) controls the balance between creativity and predictability: at 0.7, responses are informative yet feel natural, like talking to a knowledgeable friend instead of a robotic encyclopedia.

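In code, initialization is a one-liner; this sketch uses gpt-3.5-turbo as discussed above, but any chat model name works:

```python
from langchain_openai import ChatOpenAI

# temperature=0.7 trades a little predictability for more natural phrasing
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
```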

5. Memory Management

Memory in LangChain mimics how humans recall information by storing interactions as messages, whether from the human or the AI. InMemoryChatMessageHistory is temporary: it resets when the runtime disconnects, which makes it great for demos, though you can swap in a database-backed history for production.

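A common way to wire this up, following the pattern in LangChain’s docs, is one history object per session id; the store dict and get_session_history helper below are our own naming choices:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory

# One history per session id; everything lives in RAM and vanishes
# when the Colab runtime disconnects
store = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]
```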

6. The Conversation Runnable

Runnables are LangChain’s composable workflow units: sequences that process inputs. RunnableWithMessageHistory automates context injection, turning stateless calls into a conversational thread.

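Wrapping the model takes a single call; a sketch, assuming the model and get_session_history defined above:

```python
from langchain_core.runnables.history import RunnableWithMessageHistory

# Every invoke() now loads the session's history, calls the model,
# and appends both the human message and the AI reply to memory
chain = RunnableWithMessageHistory(model, get_session_history)
```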

This is the magic: it chains model + memory seamlessly.

7. User Input and Response Handling

In Colab, traditional input() loops can be awkward, so we use ipywidgets to create a text box and a “Send” button. When you type a message and click “Send,” the RunnableWithMessageHistory wraps your text as a HumanMessage, retrieves the history, and generates an AIMessage in response. Error handling keeps the interface reliable, and debug prints trace each step of execution, like a GPS charting the chatbot’s path.

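The widget code in the original post is shown as an image; here is a self-contained sketch in the same spirit (the session id "demo" is an arbitrary choice):

```python
import ipywidgets as widgets
from IPython.display import display
from langchain_core.messages import HumanMessage

text_box = widgets.Text(placeholder="Type your message...")
send_button = widgets.Button(description="Send")
output = widgets.Output()

def on_send(_):
    user_text = text_box.value.strip()
    if not user_text:
        return
    text_box.value = ""
    with output:
        print(f"You: {user_text}")
        try:
            # The runnable handles history; we just pass the new message
            response = chain.invoke(
                [HumanMessage(content=user_text)],
                config={"configurable": {"session_id": "demo"}},
            )
            print(f"Bot: {response.content}")
        except Exception as e:
            print(f"Error: {e}")  # keep the UI alive on API failures

send_button.on_click(on_send)
display(text_box, send_button, output)
```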

The button interface is intuitive: type, click, and see the response. It’s like texting, but with an AI.

Running and Testing: See the LangChain Chatbot in Action

Let’s look at what happens behind the scenes. When you input a message, it’s wrapped as a HumanMessage. The runnable fetches the history (say, a past mention of Canada), injects it into the prompt implicitly, and then calls the LLM. The response updates memory, closing the loop. This is retrieval-augmented generation (RAG) lite, with memory acting as the “retriever”: grounding responses in conversational context reduces hallucinations, which is essential for reliable AI.

Compare this to raw OpenAI calls: without LangChain, you’d have to manually concatenate history strings into every prompt, which is tedious and can lead to token overflows.

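You can verify the context handling without the widget UI; here’s a quick smoke test reusing the chain and the same session id:

```python
from langchain_core.messages import HumanMessage

config = {"configurable": {"session_id": "demo"}}

# First turn establishes the subject
print(chain.invoke([HumanMessage(content="Tell me about Canada.")], config).content)

# Second turn: "its" should resolve to Canada via the stored history
print(chain.invoke([HumanMessage(content="What's its population?")], config).content)
```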

We’ve built a context-aware chatbot in Colab, blending LangChain’s theoretical pillars of modularity, memory, and runnables with practical code. It’s not just a demo; it’s proof that sophisticated AI is accessible, turning “forgetful” LLMs into reliable companions. Experiment, tweak, and share with your colleagues. If this sparked ideas, check LangChain’s docs for deeper dives. What’s your next chain? Happy coding! 

Learn with WeCloudData

At WeCloudData, we offer hands-on courses in Data Science, LLM, and AI to help learners and professionals like you stay ahead of the curve.

Whether you’re just starting or looking to level up your skills, our programs are designed to teach you the real-world techniques used in today’s AI-driven world, including emerging practices like context engineering.

Learn. Build. Lead with WeCloudData.

Ready to take your next step in AI?

Explore our courses →

