A Beginner's Guide to Getting Started with LangChain


I have been building LLM applications since ChatGPT came onto the market. Along the way, I have faced a number of challenges, such as:

  1. Managing prompts: to solve this, we had to build our own prompt templates.
  2. Ensuring the output comes back in the desired format, using prompt engineering techniques.
  3. Chaining prompts together, which is one of the biggest challenges of all.

LangChain solves all of these problems, and building LLM applications has never been easier since. And that's not all: it provides a whole set of other tools as well. For example, with few-shot learning (where you include a handful of examples inside the prompt), LangChain's FewShotPromptTemplate means you don't have to repeat those examples again and again. It can also remember conversational context with memory, and much more.

What is LangChain?

In simple words, LangChain is an open-source framework that helps you build applications powered by Large Language Models without having to handle every nitty-gritty detail manually. With LangChain, you can:

  • Chain multiple LLM calls together
  • Easily manage prompts (including dynamic formatting)
  • Keep track of context with built-in memory tools
  • Integrate with different LLM providers like OpenAI, Anthropic, Gemini, etc.

It's a game changer: in the past you had to manage the API calls to the LLMs yourself, whereas LangChain now handles them for us, and in a uniform fashion. So if you want to switch LLMs in the future, you only have to change the model declaration, keeping the rest of the application code intact.


Now let's look at some examples of LangChain in action that will help you start writing code with it.

Example 1: Create a simple LLM call using LangChain

We'll write some basic code that initializes an OpenAI model using LangChain and invokes it.

# Import the necessary libraries
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

# Load the environment variables (ChatOpenAI reads OPENAI_API_KEY from the environment)
load_dotenv()

# Initialize the OpenAI model
llm = ChatOpenAI(model="gpt-4o-mini")

# Invoke the model and print the response
llm_response = llm.invoke("Tell me a joke")
print(llm_response)

### OUTPUT
"""
content='Why did the scarecrow win an award? \n\nBecause he was outstanding in his field!' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 11, 'total_tokens': 30, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None} id='run-653d047e-cf59-4770-858d-1a3fa9cef1a5-0' usage_metadata={'input_tokens': 11, 'output_tokens': 19, 'total_tokens': 30, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}
"""        

I think what we are doing here is pretty straightforward, and the comments should help you follow the code.

The output of the LLM here is an AIMessage, a standardized message format provided by LangChain so that your code can work with responses from any provider in the same way.

Example 2: Convert the LLM response into a string

In Example 1, we called the LLM and got the response in the AIMessage format that LangChain provides. But to process it further, or to show it to the user, we need it as a plain string.

In this example, we will do exactly that using LangChain's StrOutputParser.

# Import the necessary libraries
import os

from dotenv import load_dotenv
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Load the environment variables (ChatOpenAI reads OPENAI_API_KEY from the environment)
load_dotenv()

# Initialize the OpenAI model and String Output Parser
llm = ChatOpenAI(model="gpt-4o-mini")
output_parser = StrOutputParser()

# Our first langchain chain
chain = llm | output_parser

# Invoke the model and print the response
llm_response = chain.invoke("Tell me a joke")
print(llm_response)

### OUTPUT
"""
Why did the scarecrow win an award? 

Because he was outstanding in his field!
"""        

This time we received the output as a string. This is also the first time we actually created a chain: it first calls the LLM, then takes the output and converts it to a string.

[Figure: LangChain Chain — the LLM's output piped into the string output parser]

Example 3: Convert the LLM response into a structured output

When developing LLM applications, one of the biggest challenges is obtaining responses in a consistently structured format. Previously, we used prompt engineering to define the desired output structure in the prompt, but as we all know, LLMs can sometimes “hallucinate,” producing responses that don’t match our expectations.

LangChain addresses this problem by integrating with Pydantic models. By defining the expected response structure as a Pydantic class and passing that class to the model, we can ensure that the LLM responses come back in the specified format.
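If you have not used Pydantic before, here is a minimal sketch of how such a class validates data on its own (the Joke fields are a made-up example); with_structured_output builds on exactly this mechanism:

```python
from pydantic import BaseModel, Field, ValidationError

class Joke(BaseModel):
    setup: str = Field(description="The joke setup")
    punchline: str = Field(description="The punchline")

# Valid data is parsed into typed attributes
joke = Joke(setup="Why did the chicken cross the road?",
            punchline="To get to the other side.")
print(joke.punchline)

# Missing or wrongly typed fields raise a validation error
try:
    Joke(setup="No punchline here")
except ValidationError as exc:
    print("Validation failed with", exc.error_count(), "error(s)")
```

The field descriptions are not just documentation: LangChain passes them along to the model to explain what each field should contain.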

# Import the necessary libraries
import os
from typing import List

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

# Load the environment variables (ChatOpenAI reads OPENAI_API_KEY from the environment)
load_dotenv()

# Define the Pydantic Class (LLM response structure)
class MobileReview(BaseModel):
    phone_model: str = Field(description="Name and model of the phone")
    rating: float = Field(description="Overall rating out of 5")
    pros: List[str] = Field(description="List of positive aspects")
    cons: List[str] = Field(description="List of negative aspects")
    summary: str = Field(description="Brief summary of the review")

# Prompt for the model
review_text: str = """
    Just got my hands on the new Galaxy S21 and wow, this thing is slick! The screen is gorgeous,
    colors pop like crazy. Camera's insane too, especially at night - my Insta game's never been
    stronger. Battery life's solid, lasts me all day no problem.
    Not gonna lie though, it's pretty pricey. And what's with ditching the charger? C'mon Samsung.
    Also, still getting used to the new button layout, keep hitting Bixby by mistake.
    Overall, I'd say it's a solid 4 out of 5. Great phone, but a few annoying quirks keep it from
    being perfect. If you're due for an upgrade, definitely worth checking out!
"""

# Initialize the OpenAI model
llm = ChatOpenAI(model="gpt-4o-mini")

# Providing the Pydantic model to the LLM
structured_llm = llm.with_structured_output(MobileReview)

# Invoke the model and print the response
output = structured_llm.invoke(review_text)
print(f"Output: {output}")
print(f"Output.pros: {output.pros}")

### OUTPUT
"""
Output: phone_model='Samsung Galaxy S21' rating=4.0 pros=['Gorgeous screen with vibrant colors', 'Excellent camera performance, especially in low light', 'Solid battery life, lasts all day'] cons=['Pretty pricey', 'No charger included in the box', 'New button layout takes time to get used to'] summary='The Galaxy S21 is a sleek and powerful smartphone with an amazing display and impressive camera capabilities, making it a great upgrade choice despite a few minor drawbacks.'
Output.pros: ['Gorgeous screen with vibrant colors', 'Excellent camera performance, especially in low light', 'Solid battery life, lasts all day']
"""        

If you have any questions or need clarification, feel free to leave a comment on this blog or reach out to me on

Topmate: https://topmate.io/yash0307jain

You can read more blogs on Medium

Medium: https://medium.com/@yash0307jain

Thanks for reading, and I’ll see you next time!
