Inside LangChain — The Core Parts You’ll Use
A clear, practical guide to LangChain’s core parts — models, prompts, chains, agents, memory, retrievers, and observability — with small Python examples.
Why knowing the components helps
LangChain looks like a lot at first. But it’s really a set of building blocks: models, prompts, chains, agents, memory, retrievers/indexes, and tools for observing and deploying. Learn the parts and you’ll stop guessing where bugs hide or why costs spike.
This post walks through each core component in plain language and shows small Python examples you can try.
Models: The thing that answers
Models (LLMs) are where the text comes from. LangChain wraps different providers so your code talks to a consistent interface whether you use OpenAI, Anthropic, a hosted model, or a local model. That makes swapping models simpler.
In Python you usually create a model object and feed it prompts or run it inside chains. Here’s a tiny example showing the structure (swap ChatOpenAI for the provider you use):
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.2)
print(llm.invoke("Summarize: LangChain core components").content)
Keep in mind model calls are the main cost driver — plan which model you use for which job.
Prompts and Templates: Clear instructions
Prompts tell the model what to do. Prompt templates let you keep instructions consistent and insert variables safely. That makes testing and teamwork easier.
Example Python snippet using a simple prompt template:
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template("Summarize the following text: {text}\nBe short.")
print(template.format(text="LangChain helps build LLM apps."))
Templates stop accidental changes and help when you need many similar prompts.
Chains: Small, testable steps
Chains are sequences of steps. You can run a model, transform the output, call another model, and so on. Use chains when tasks are predictable and you want to compose logic cleanly.
A quick chain example in Python (recent LangChain composes steps with the | operator; the older LLMChain class is deprecated):
# Pipe the prompt template into the model
chain = template | llm
print(chain.invoke({"text": "LangChain core pieces explained simply."}).content)
Chains keep flows readable and make it easier to unit test each step.
Agents and Tools: Letting the model pick actions
Agents let the model choose tools at runtime — like calling search, your database, or a file writer. Tools are functions the agent can call. Use them when decisions are not strictly linear.
Agents add flexibility but also more things to monitor: tool failures, rate limits, and safety. Treat tools like external APIs that can fail.
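As a minimal sketch, here’s one way to expose a tool to a chat model — the word_count tool is a made-up example, while @tool and bind_tools are standard LangChain hooks (this reuses the llm object from earlier):
from langchain_core.tools import tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Bind the tool so the model can request it; the model decides when to call it
llm_with_tools = llm.bind_tools([word_count])
reply = llm_with_tools.invoke("How many words are in 'LangChain core parts'?")
print(reply.tool_calls)  # the tool invocations the model chose, if any
In a full agent loop you would execute each requested tool and feed the result back to the model; frameworks like LangGraph handle that loop for you.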
Memory: Keeping relevant context
If you want the agent to “remember” past chats or facts, add memory. Simple memory keeps a short conversation history. Long-term memory can be a vector store or a database that stores summaries. Memory changes behavior — with it the agent can follow up on earlier details.
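A minimal sketch of short-term memory, assuming the chat model llm from earlier: keep the message list yourself and resend it on every turn.
from langchain_core.messages import HumanMessage

# Simple short-term memory: the history list is the conversation so far
history = [HumanMessage("My name is Sam.")]
history.append(llm.invoke(history))              # the model's reply (an AIMessage)
history.append(HumanMessage("What is my name?"))
print(llm.invoke(history).content)               # the model can now answer "Sam"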
Think about privacy and cost when you store memory. Plan retention and who can access the data.
Indexes, Retrievers, and Vector Stores: Finding exact info
When your agent needs to answer from documents, you’ll use embeddings, a vector store, an index, and a retriever. LangChain supports many vector stores (Pinecone, Weaviate, Milvus, etc.), and you can plug them in with the same interface. Embeddings turn text into vectors; retrievers find the nearest pieces; models use those pieces to answer.
This is often the place that adds cost and latency — embedding many docs and querying remote DBs matters. Test with small datasets before you index everything.
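Here’s a small retrieval sketch using the in-memory vector store that ships with langchain-core — fine for tests, and you’d swap in Pinecone, Weaviate, or Milvus for production:
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

docs = ["LangChain wraps model providers.", "Retrievers find relevant text chunks."]
store = InMemoryVectorStore.from_texts(docs, embedding=OpenAIEmbeddings())
retriever = store.as_retriever(search_kwargs={"k": 1})  # return the single closest chunk
print(retriever.invoke("How do I find relevant chunks?"))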
Observability and Deployment: LangSmith & LangGraph
LangChain offers observability and deployment tools. LangSmith helps trace, debug, and evaluate runs. LangGraph gives you orchestration for long-running or stateful agents and an IDE to visualize agent behavior. These tools help you find bugs and understand what actually happened when the system runs.
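Turning on LangSmith tracing is usually just environment variables; a minimal sketch (check the LangSmith docs for the current variable names):
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"            # switch tracing on
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
# Chain and agent runs after this point are traced to LangSmith automatically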
Check the LangSmith and LangGraph pricing pages for the latest tiers and exact numbers. If you want a short starter repo, use the LangChain docs quickstart and adapt the Python snippets to your provider. Add agents, tools, or LangGraph orchestration iteratively, and only when you need durable or stateful behavior.

