Function calling lets LLMs invoke external functions or APIs based on conversation context. Instead of just generating text, the model can decide to call a weather API, search a database, or execute code — then use the results to formulate its response.

How Function Calling Works

You define available functions with names, descriptions, and parameter schemas. When the user asks "What's the weather in Tokyo?", the model returns a structured function call: {"name": "get_weather", "arguments": {"city": "Tokyo"}}. Your code executes the function and feeds the result back to the model.
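The loop above can be sketched in Python. This is a minimal illustration, not a real API integration: the model's response is hard-coded as the structured call it would return, and get_weather is a stand-in for an actual weather API.

```python
import json

# Tool schema the model sees (OpenAI-style shape; field names are illustrative)
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> dict:
    # Stand-in for a real weather API call
    return {"city": city, "temp_c": 21, "conditions": "clear"}

# Suppose the model responded with this structured call:
tool_call = {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}

# Dispatch: look up the function, parse the arguments, execute
registry = {"get_weather": get_weather}
args = json.loads(tool_call["arguments"])
result = registry[tool_call["name"]](**args)

# Feed the result back to the model as a tool message so it can answer
tool_message = {
    "role": "tool",
    "name": tool_call["name"],
    "content": json.dumps(result),
}
print(tool_message["content"])
```

The registry pattern (name → callable) is the usual way to keep dispatch safe: the model can only trigger functions you explicitly expose.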

Key Concepts

  • Tool Schema — JSON schema defining available functions — name, description, and typed parameters
  • Tool Choice — Whether the model must call a tool, can choose to, or should never call one — configurable per request
  • Parallel Tool Calls — Some models can invoke multiple functions simultaneously for efficiency
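Parallel tool calls mean a single model response can contain several function calls, each of which needs its own result message. A sketch of handling that, with a hypothetical hard-coded response standing in for the model's output:

```python
import json

# Hypothetical model response containing two parallel tool calls
response_tool_calls = [
    {"id": "call_1", "name": "get_weather", "arguments": '{"city": "Tokyo"}'},
    {"id": "call_2", "name": "get_weather", "arguments": '{"city": "Paris"}'},
]

def get_weather(city: str) -> dict:
    # Stand-in for a real weather API call
    return {"city": city, "temp_c": 20}

registry = {"get_weather": get_weather}

# Execute every call and build one tool message per call id,
# so the model can match each result to the call that produced it
tool_messages = []
for call in response_tool_calls:
    result = registry[call["name"]](**json.loads(call["arguments"]))
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps(result),
    })

print(len(tool_messages))
```

Independent calls like these can also be executed concurrently (e.g. with a thread pool) since neither depends on the other's result.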

Learn Function Calling — Top Videos

Function Calling Educators

OpenAI

@openai

AI Coding

OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.

1.9M Subs
456 Videos
36.2K Avg Views
2.18% Engagement
View Profile →
Academind

@academind

AI Coding

There's always something to learn! We've been creating courses and tutorials on tech-related topics since 2016! We teach develop...

929K Subs
752 Videos
17K Avg Views
2.39% Engagement
View Profile →

Frequently Asked Questions

Is function calling the same as MCP?

Function calling is the LLM capability. MCP is a protocol for organizing and serving tools to AI clients. MCP uses function calling under the hood, but adds discovery, authentication, and standardization.

Which LLMs support function calling?

GPT-4o, Claude 3+, Gemini, and most other major LLMs support function calling. The exact implementation and reliability vary by model.

Want a structured learning path?

Plan a Function Calling Lesson →