How Function Calling and MCP Power Real AI Agents
A clear, practical guide to using Function Calling and the Model Context Protocol (MCP) together. Learn how the two phases (decide vs. execute) fit and what to build first.
Why this matters
LLMs are great at turning words into helpful responses. But businesses need repeatable, auditable actions — things that plug into billing, calendars, or internal databases. The gap between a model’s text and real-world work is where integration matters.
Two patterns have emerged: Function Calling, which turns intent into a structured instruction, and the Model Context Protocol (MCP), which standardizes how tools are discovered and run. Used together, they let agents decide what to do and then execute it safely and reliably.
Function Calling — the translator from chat to code
Function Calling is the simplest idea here. The developer gives the model a list of functions (often described with JSON schemas). When the model decides it needs a tool, it returns a structured function call, for example get_current_stock_price(company: "AAPL"). Your app actually runs the API call. That keeps the model focused on deciding, not executing. OpenAI's guide documents this approach with examples.
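To make that concrete, here is a minimal sketch assuming the OpenAI Python SDK's Chat Completions tools interface. The get_current_stock_price name, its schema, and the hardcoded price are placeholders for illustration, not a real market-data integration.

```python
import json
from openai import OpenAI

client = OpenAI()

# Describe the tool to the model with a JSON schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_stock_price",
        "description": "Get the latest trading price for a stock ticker.",
        "parameters": {
            "type": "object",
            "properties": {
                "company": {"type": "string", "description": "Ticker symbol, e.g. AAPL"},
            },
            "required": ["company"],
        },
    },
}]

def get_current_stock_price(company: str) -> float:
    # Placeholder: a real implementation would call a market-data API.
    return 189.84

# Phase one: the model decides whether a tool is needed and returns
# a structured call; it never executes anything itself.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is Apple trading at?"}],
    tools=tools,
)

# Phase two: your app runs the call the model asked for.
for call in response.choices[0].message.tool_calls or []:
    if call.function.name == "get_current_stock_price":
        args = json.loads(call.function.arguments)
        print(get_current_stock_price(**args))
```

Note the division of labor: the model only emits the name and JSON arguments; your code decides whether and how to actually run it.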
That pattern is low friction. It's great for single apps, low-latency paths, or cases where you control both the model and the tool logic. The downside: each LLM provider formats function calls a bit differently, and you still have to host and wire every tool into your app. That wiring is where the maintenance and scaling work shows up.
MCP — a standard way to expose tools and data
MCP (Model Context Protocol) is an open standard for exposing tools, connectors, and data to LLM-based apps. Think of it like a USB port for AI: an app can ask an MCP server “what tools do you have?” and call them in a consistent way. That makes integrations portable and less tied to one provider. See the MCP docs for details and how to build servers and clients.
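If you want to see the shape of an MCP server, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK (the `mcp` package), following its documented quickstart pattern. The server name and tool body are invented for illustration.

```python
from mcp.server.fastmcp import FastMCP

# An MCP server hosts tools; any MCP client can discover and invoke them.
mcp = FastMCP("stock-tools")

@mcp.tool()
def get_current_stock_price(company: str) -> float:
    """Get the latest trading price for a stock ticker."""
    # Placeholder: a real implementation would call a market-data API.
    return 189.84

if __name__ == "__main__":
    # stdio is the simplest transport for connecting a local client.
    mcp.run(transport="stdio")
```

The same server can now back any MCP-aware agent, which is the portability the protocol is after.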
MCP runs client-server style: an MCP Server hosts tools, an MCP Client talks to that server, and the agent (the LLM app) uses the client to discover and invoke tools. MCP messages use JSON-RPC 2.0, which keeps the protocol simple and transport-agnostic.
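Concretely, discovery and invocation travel as ordinary JSON-RPC 2.0 messages. The sketch below shows the shape of a tools/list request and a tools/call request as defined in the MCP specification; the tool name and arguments are illustrative.

```python
import json

# Discovery: ask the server what tools it exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invocation: call one of the discovered tools by name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_current_stock_price",
        "arguments": {"company": "AAPL"},
    },
}

print(json.dumps(call_request, indent=2))
```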
How Function Calling and MCP fit together — two phases, one flow
Treat Function Calling as phase one: the model recognizes a need and returns a structured instruction. Phase two is MCP: a standard host executes that instruction, applies policy, handles auth, logs the call, and returns the result to the agent. Together they let you separate decision and execution cleanly. LangWatch and other LLM-ops tools emphasize observability across both phases so you can trace what happened.
That separation matters when you scale. Function Calling keeps the AI side simple and provider-friendly. MCP lets teams add, remove, or share tools across apps without rewriting agents every time. In practice you’ll still wire the two together — the agent uses Function Calling to pick a tool, and the MCP server actually runs it.
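In code, that glue can be thin. A hedged sketch, assuming the MCP Python SDK's ClientSession and an OpenAI-style tool-call object (the exact result shape may differ in your SDK version): the agent takes the model's structured call from phase one and forwards it to the MCP server for phase two.

```python
import json
from mcp import ClientSession  # official MCP Python SDK

async def execute_via_mcp(session: ClientSession, tool_call) -> str:
    """Phase two: forward the model's structured instruction to an MCP server."""
    # tool_call is the Function Calling output (OpenAI-style shape assumed).
    name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)
    # The MCP server side is where auth, policy, and logging apply.
    result = await session.call_tool(name, arguments=arguments)
    # Assumes the first content item is text; adjust for richer results.
    return result.content[0].text
```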
Real-world tradeoffs to plan for
If you start with Function Calling only, you get low overhead and fast iteration. But you'll pay in manual wiring, duplicated connectors, and potential vendor lock-in, because each provider's function format differs. That's OK for prototypes or single-app use.
If you adopt MCP early, you reduce duplication and improve portability. But you add infrastructure: MCP servers, client adapters, auth layers, and monitoring. For enterprise workloads where security, policy enforcement, and audit trails matter, MCP’s extra complexity is worth it. The trick is balancing short-term speed with long-term maintainability.
A short checklist for teams building agentic systems
Start small: prototype with Function Calling to prove the workflow. Capture traces from day one.
Add an MCP server when you need portability, reuse, or stronger security controls. Document each tool’s schema and auth.
Put tracing and policy enforcement into the execution layer. Use an LLM-ops tool to log inputs, outputs, latencies, and errors so you can debug chains of calls (a minimal sketch follows this list).
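As a starting point for that execution-layer tracing, here is a plain-Python sketch with no particular LLM-ops tool assumed: a decorator that logs inputs, outputs, latency, and errors around each tool call. A real deployment would ship these records to your observability backend.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tool-trace")

def traced(fn):
    """Wrap a tool so every call logs inputs, outputs, latency, and errors."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            log.info("%s args=%s kwargs=%s -> %r (%.1f ms)",
                     fn.__name__, args, kwargs, result,
                     (time.perf_counter() - start) * 1000)
            return result
        except Exception:
            log.exception("%s failed after %.1f ms",
                          fn.__name__, (time.perf_counter() - start) * 1000)
            raise
    return wrapper

@traced
def get_current_stock_price(company: str) -> float:
    return 189.84  # placeholder value
```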
Final thought
Function Calling answers the question “what should be done?” MCP answers “how do we run it, safely and consistently?” Use Function Calling to let the model pick and format work. Use MCP to host, secure, and observe the tools that do the work. Together they make AI agents that feel less like toys and more like reliable software components.