# What Is Function Calling in AI?
Function calling (also called tool use) allows a large language model to invoke external functions, APIs, or tools as part of generating a response.
## How It Works
- You define available functions with their parameters (name, description, input schema)
- The model decides when a function call would help answer the user’s question
- The model outputs a structured function call (not free text)
- Your application executes the function and returns the result
- The model incorporates the result into its final response
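The first step above, defining an available function, is usually done with a JSON-schema-style description. Here is a minimal sketch of what such a definition can look like; the exact field names vary by provider, and `get_weather` is the illustrative example used below:

```python
# Hypothetical tool definition for a weather lookup. The JSON-schema shape
# (name, description, parameters) mirrors what most provider APIs expect,
# but field names and nesting differ slightly between vendors.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Tokyo'",
            },
        },
        "required": ["city"],
    },
}]

# The model never sees your code, only this declaration. It uses the
# description and schema to decide when and how to call the function.
print(tools[0]["name"])
```

The description fields matter: the model relies on them to decide whether a call would help, so they should state clearly what the function does and what each parameter means.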
## Example
User: “What’s the weather in Tokyo?”
Instead of guessing, the model calls:
```json
{
  "function": "get_weather",
  "arguments": { "city": "Tokyo" }
}
```
Your app runs the function, returns `{"temp": 22, "condition": "sunny"}`, and the model responds: “It’s currently 22°C and sunny in Tokyo.”
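On the application side, this round trip amounts to routing the model's structured call to a real implementation and serializing the result back. A minimal sketch, assuming a simple name-to-function registry (the `get_weather` stub here stands in for a real weather API):

```python
import json

def get_weather(city: str) -> dict:
    # Stand-in for a real weather API lookup; returns the example values.
    return {"temp": 22, "condition": "sunny"}

# Map function names the model may emit to actual implementations.
registry = {"get_weather": get_weather}

# The structured call the model produced (from the example above).
model_call = {"function": "get_weather", "arguments": {"city": "Tokyo"}}

# Execute the call and serialize the result to send back to the model.
result = registry[model_call["function"]](**model_call["arguments"])
print(json.dumps(result))  # this JSON string is what the model receives
```

In a real app the serialized result is appended to the conversation as a tool/function message, and the model is invoked again to produce the final natural-language answer.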
## Function Calling vs. MCP
Function calling is the mechanism — the model’s ability to output structured tool invocations. MCP (Model Context Protocol) is a protocol that standardizes how tools are defined, discovered, and connected across different AI clients, so a tool server written once can be used by any MCP-compatible client.
## Supported Models
| Model | Function Calling |
|---|---|
| GPT-4o | Yes (native) |
| Claude 3.5 | Yes (tool use) |
| Gemini Pro | Yes |
| Llama 3 | Yes (with tooling) |
| Mistral Large | Yes |
## Function Calling in Elvean
Elvean supports agentic tool calling across all compatible models — web search, image search, charts, maps, and any tools exposed via MCP servers. It brings these concepts together in one native Mac app: local models, cloud APIs, agentic tools, and more.
Learn more about Elvean