What is Function Calling?
Function calling (also called tool use) is the ability for LLMs to invoke external functions or APIs based on user requests. Instead of just generating text, the LLM can decide to call a function, receive the result, and use that information in its response.
For example, when you ask "What's the weather in Tokyo?", the LLM can:
- Recognize this requires real-time data
- Call a weather API function
- Use the result to give you an accurate answer
Why Does Function Calling Exist?
LLMs have significant limitations:
- No real-time data: They can't access current information (weather, stock prices, news)
- No actions: They can't send emails, book appointments, or modify databases
- No external systems: They can't access your company's data or internal tools
- Knowledge cutoff: Their training data has a cutoff date
Function Calling Bridges the Gap
It transforms LLMs from text generators into intelligent agents that can interact with the real world through APIs and functions you define.
How Function Calling Works
The process follows a specific flow:
- Define tools: You describe available functions (name, description, parameters)
- User makes request: "What's the weather in Tokyo?"
- LLM decides: The model determines if a function should be called
- Returns function call: Model outputs the function name and arguments (not actual text)
- You execute: Your code runs the actual function
- Return result: Send the function output back to the LLM
- Final response: LLM generates a response using the function result
The flow, visualized:

```
User: "What's the weather in Tokyo?"
  ↓
LLM: "I should call get_weather(location='Tokyo')"
  ↓
Your code: calls actual weather API → returns {"temp": 22, "condition": "sunny"}
  ↓
LLM: "The weather in Tokyo is currently 22°C and sunny."
```
Function Calling with OpenAI
Here's a complete example:
Step 1: Define Your Functions
```python
import json
from openai import OpenAI

client = OpenAI()

# Define the tools (functions) the model can use
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., 'Tokyo' or 'New York'"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }
]
```
Step 2: Implement the Actual Function
```python
def get_weather(location: str, unit: str = "celsius") -> dict:
    """This would call a real weather API in production."""
    # Simulated response
    return {
        "location": location,
        "temperature": 22,
        "unit": unit,
        "condition": "sunny"
    }
```
Step 3: Make the API Call
```python
messages = [
    {"role": "user", "content": "What's the weather like in Tokyo?"}
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    tools=tools,
    tool_choice="auto"  # Let model decide when to use tools
)

# Check if model wants to call a function
message = response.choices[0].message

if message.tool_calls:
    # Model wants to call a function
    tool_call = message.tool_calls[0]
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    print(f"Calling: {function_name}({arguments})")

    # Execute the function
    if function_name == "get_weather":
        result = get_weather(**arguments)

    # Send result back to model
    messages.append(message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result)
    })

    # Get final response
    final_response = client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )
    print(final_response.choices[0].message.content)
    # "The weather in Tokyo is currently 22°C and sunny."
```
Function Calling with Anthropic (Claude)
```python
from anthropic import Anthropic

client = Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["location"]
        }
    }
]

response = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}]
)

# Handle tool use (similar flow to OpenAI)
if response.stop_reason == "tool_use":
    tool_use = next(block for block in response.content if block.type == "tool_use")
    # Execute function and continue conversation...
```
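The continuation works like the OpenAI loop: execute the tool yourself, then send the output back to Claude as a `tool_result` content block inside a follow-up user message. A minimal sketch of that message construction (the helper name is ours, not part of the SDK):

```python
import json

def build_tool_result_message(tool_use_id: str, result: dict) -> dict:
    """Wrap a locally-executed tool's output in the "tool_result"
    content-block shape that Claude's Messages API expects."""
    return {
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use_id,
            "content": json.dumps(result),
        }],
    }

# After running get_weather(**tool_use.input) yourself, append the
# assistant turn and this tool_result turn to the messages list, then
# call client.messages.create(...) again to get the final text answer.
```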
Using LangChain for Function Calling
LangChain simplifies tool creation and usage:
```python
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.prompts import ChatPromptTemplate

# Define tools using the @tool decorator
@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is 22°C and sunny."

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Search results for: {query}"

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    return f"Email sent to {to} with subject: {subject}"

# Create the agent
llm = ChatOpenAI(model="gpt-4")
tools = [get_weather, search_web, send_email]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Use the agent
result = executor.invoke({
    "input": "What's the weather in Tokyo and send an email about it to john@example.com"
})
print(result["output"])
```
Common Use Cases
API Integration
Weather, stocks, news, or any external API your application needs.
Database Queries
Let users query databases using natural language.
Code Execution
Run calculations, data analysis, or code snippets.
CRM Operations
Look up customers, create tickets, update records.
Calendar & Email
Schedule meetings, send emails, manage tasks.
Search & Retrieval
Search documents, knowledge bases, or the web.
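As an illustration of the database-query use case, here is a minimal sketch of a tool that only permits read-only SELECT statements (the table name and sample data are invented for the example):

```python
import sqlite3

def query_database(sql: str) -> list:
    """Run a read-only SQL query against a local SQLite database.
    Rejects anything that is not a SELECT, so the LLM cannot mutate data."""
    if not sql.strip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    # Illustrative in-memory database; a real tool would connect
    # to your actual database with a read-only user.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
    conn.execute("INSERT INTO customers VALUES ('Aiko', 'Tokyo')")
    rows = conn.execute(sql).fetchall()
    conn.close()
    return rows
```

Exposed as a tool, this lets the model translate "which customers are in Tokyo?" into SQL while your code enforces the read-only boundary.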
Parallel Function Calling
Modern LLMs can request multiple function calls at once:
# User: "What's the weather in Tokyo and New York?"
# Model returns TWO tool calls:
tool_calls = [
{"name": "get_weather", "arguments": {"location": "Tokyo"}},
{"name": "get_weather", "arguments": {"location": "New York"}}
]
# Execute both in parallel for efficiency
import asyncio
async def execute_parallel(tool_calls):
tasks = [execute_tool(tc) for tc in tool_calls]
return await asyncio.gather(*tasks)
Best Practices
- Clear descriptions: Write detailed function descriptions; the LLM relies on them to decide when and how to call each function
- Validate inputs: Never trust LLM-generated parameters blindly; validate before executing
- Handle errors: Functions can fail; return clear error messages the LLM can interpret
- Limit scope: Only expose functions that are safe and necessary
- Use enums: Constrain parameters to valid values when possible
- Timeout protection: Set timeouts on function execution
- Logging: Log all function calls for debugging and monitoring
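The "validate inputs" and "use enums" points above can be sketched as a guard that runs before the tool executes. This sketch assumes the `get_weather` schema from earlier; the helper name is ours:

```python
def validate_weather_args(arguments: dict) -> dict:
    """Check LLM-generated arguments against get_weather's schema
    before executing. Raises ValueError on invalid input."""
    if "location" not in arguments:
        raise ValueError("Missing required parameter: location")
    unit = arguments.get("unit", "celsius")
    if unit not in ("celsius", "fahrenheit"):  # enforce the enum constraint
        raise ValueError(f"Invalid unit: {unit!r}")
    return {"location": str(arguments["location"]), "unit": unit}
```

Raising a clear error (rather than crashing) lets you feed the message back to the model as a tool result so it can correct itself.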
Security Considerations
Important: Function Calling = Code Execution
When you enable function calling, you're letting the LLM trigger code execution. Treat this with appropriate caution.
- Sanitize inputs: Prevent injection attacks in function arguments
- Principle of least privilege: Functions should have minimal permissions
- Rate limiting: Prevent abuse through excessive function calls
- Confirmation for sensitive actions: Require user confirmation for destructive operations
- Audit trail: Log who triggered what functions and when
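The confirmation and audit-trail points can be combined in a small dispatcher. A hedged sketch (the tool names, registry shape, and `confirm` callback are illustrative, not a standard API):

```python
import logging

logging.basicConfig(level=logging.INFO)

DESTRUCTIVE_TOOLS = {"send_email", "delete_record"}  # illustrative names

def dispatch(tool_name: str, arguments: dict, registry: dict, confirm) -> dict:
    """Run a tool call: log it for the audit trail, and require explicit
    user confirmation before any destructive tool executes."""
    logging.info("tool call: %s(%s)", tool_name, arguments)
    if tool_name in DESTRUCTIVE_TOOLS and not confirm(tool_name, arguments):
        return {"status": "cancelled", "reason": "user declined"}
    return {"status": "ok", "result": registry[tool_name](**arguments)}
```

In production, `confirm` would surface a prompt to the user; returning a structured "cancelled" result lets the LLM explain the outcome rather than failing silently.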
Function Calling vs. Agents
| Aspect | Function Calling | Agents |
|---|---|---|
| Complexity | Single function calls | Multi-step reasoning and planning |
| Control | You control the loop | Agent controls the loop |
| Use case | Simple tool use | Complex, autonomous tasks |
| Predictability | More predictable | Less predictable, more flexible |
Start with function calling for simpler use cases; graduate to agents when you need autonomous multi-step reasoning.
Master Function Calling with Expert Guidance
Our Agentic AI program covers function calling, tool creation, and building sophisticated AI agents. Learn with hands-on projects and personalized mentorship.
Explore Agentic AI Program