Here’s an example of how to provide tools to Claude using the Messages API:
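A minimal Python sketch using the `anthropic` SDK (the model name shown is illustrative), defining a single `get_weather` tool and sending a user prompt:

```python
import anthropic

client = anthropic.Anthropic()

# A single client tool definition; input_schema uses JSON Schema.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "The unit of temperature",
                },
            },
            "required": ["location"],
        },
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
)

print(response.stop_reason)  # "tool_use" when Claude wants to call the tool
```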
How tool use works
Claude supports two types of tools:
- Client tools: Tools that execute on your systems, which include:
  - User-defined custom tools that you create and implement
  - Anthropic-defined tools like computer use and text editor that require client implementation
- Server tools: Tools that execute on Anthropic’s servers, like the web search and web fetch tools. These tools must be specified in the API request but don’t require implementation on your part.
Anthropic-defined tools use versioned types (e.g., `web_search_20250305`, `text_editor_20250124`) to ensure compatibility across model versions.

Client tools
Integrate client tools with Claude in these steps:

1. Provide Claude with tools and a user prompt
   - Define client tools with names, descriptions, and input schemas in your API request.
   - Include a user prompt that might require these tools, e.g., “What’s the weather in San Francisco?”
2. Claude decides to use a tool
   - Claude assesses if any tools can help with the user’s query.
   - If yes, Claude constructs a properly formatted tool use request.
   - For client tools, the API response has a `stop_reason` of `tool_use`, signaling Claude’s intent.
3. Execute the tool and return results
   - Extract the tool name and input from Claude’s request
   - Execute the tool code on your system
   - Return the results in a new `user` message containing a `tool_result` content block
4. Claude uses tool result to formulate a response
   - Claude analyzes the tool results to craft its final response to the original user prompt.

Steps 2 through 4 are sketched in code below.
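In this sketch, `client`, `tools`, and `response` come from the request at the top of the page, and `run_get_weather` stands in for your own weather-fetching code:

```python
if response.stop_reason == "tool_use":
    # Step 2: Claude has asked to call a tool.
    tool_use = next(block for block in response.content if block.type == "tool_use")

    # Step 3: execute the tool on your system.
    result = run_get_weather(**tool_use.input)

    follow_up = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        tools=tools,
        messages=[
            {"role": "user", "content": "What's the weather like in San Francisco?"},
            {"role": "assistant", "content": response.content},
            {
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": tool_use.id,
                        "content": str(result),
                    }
                ],
            },
        ],
    )

    # Step 4: Claude folds the tool result into its final answer.
    print(follow_up.content[0].text)
```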
Server tools
Server tools follow a different workflow:

1. Provide Claude with tools and a user prompt
   - Server tools, like web search and web fetch, have their own parameters.
   - Include a user prompt that might require these tools, e.g., “Search for the latest news about AI” or “Analyze the content at this URL.”
2. Claude executes the server tool
   - Claude assesses if a server tool can help with the user’s query.
   - If yes, Claude executes the tool, and the results are automatically incorporated into Claude’s response.
3. Claude uses the server tool result to formulate a response
   - Claude analyzes the server tool results to craft its final response to the original user prompt.
   - No additional user interaction is needed for server tool execution.

A sketch of a request that enables a server tool follows.
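The web search tool is specified by its versioned type rather than an input schema you define; the `max_uses` cap shown here is optional and illustrative:

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    tools=[
        {
            "type": "web_search_20250305",  # versioned server tool type
            "name": "web_search",
            "max_uses": 5,  # optional cap on searches, illustrative
        }
    ],
    messages=[{"role": "user", "content": "Search for the latest news about AI"}],
)

# Search results are incorporated into Claude's response automatically.
print(response.content)
```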
Tool use examples
Here are a few code examples demonstrating various tool use patterns and techniques. For brevity’s sake, the tools are simple, and the tool descriptions are shorter than would be ideal for best performance.

Single tool example

Here’s an example of using a single `get_weather` tool, like the one defined at the top of this page. When Claude decides the tool can help, it replies with a `tool_use` request similar to the first JSON block below. You would then extract the tool input, run the `get_weather` function with the provided input, and return the result in a new `user` message like the second block.
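An abridged response of roughly this shape (the `id` values and text are illustrative):

```json
{
  "id": "msg_01Aq9w938a90dw8q",
  "type": "message",
  "role": "assistant",
  "stop_reason": "tool_use",
  "content": [
    {
      "type": "text",
      "text": "I'll check the current weather in San Francisco for you."
    },
    {
      "type": "tool_use",
      "id": "toolu_01A09q90qw90lq917835lq9",
      "name": "get_weather",
      "input": {"location": "San Francisco, CA", "unit": "celsius"}
    }
  ]
}
```

And the follow-up `user` message carrying the tool result might look like:

```json
{
  "role": "user",
  "content": [
    {
      "type": "tool_result",
      "tool_use_id": "toolu_01A09q90qw90lq917835lq9",
      "content": "15 degrees, mostly cloudy"
    }
  ]
}
```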
Parallel tool use

Claude can call multiple tools in parallel within a single response, which is useful for tasks that require multiple independent operations. When using parallel tools, all `tool_use` blocks are included in a single assistant message, and all corresponding `tool_result` blocks must be provided in the subsequent user message.

Important: Tool results must be formatted correctly to avoid API errors and ensure Claude continues using parallel tools. See our implementation guide for detailed formatting requirements and complete code examples.

For comprehensive examples, test scripts, and best practices for implementing parallel tool calls, see the parallel tool use section in our implementation guide.
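As a rough sketch of that formatting rule, where `response` holds an assistant turn with several `tool_use` blocks, `run_tool` is a hypothetical dispatcher you implement, and `messages` is the running conversation list:

```python
tool_results = []
for block in response.content:
    if block.type == "tool_use":
        output = run_tool(block.name, block.input)  # execute each tool yourself
        tool_results.append(
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": str(output),
            }
        )

messages.append({"role": "assistant", "content": response.content})
# All results go back in ONE user message, one tool_result block per call.
messages.append({"role": "user", "content": tool_results})
```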
Multiple tool example

You can provide Claude with multiple tools to choose from in a single request. Here’s an example with both a `get_weather` and a `get_time` tool, along with a user query that asks for both. In this case, Claude may either:
- Use the tools sequentially (one at a time), calling `get_weather` first, then `get_time` after receiving the weather result
- Use parallel tool calls, outputting multiple `tool_use` blocks in a single response when the operations are independent

When Claude makes parallel tool calls, you must return all of the tool results in a single `user` message, with each result in its own `tool_result` block.
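A request offering both tools might look like this sketch (the `get_time` schema is an illustrative guess, not taken from the original example):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
        {
            "name": "get_time",
            "description": "Get the current time in a given time zone",
            "input_schema": {
                "type": "object",
                "properties": {
                    "timezone": {
                        "type": "string",
                        "description": "IANA time zone name, e.g. America/New_York",
                    }
                },
                "required": ["timezone"],
            },
        },
    ],
    messages=[
        {
            "role": "user",
            "content": "What is the weather like in New York right now, and what time is it there?",
        }
    ],
)

# Claude may respond with one tool_use block (sequential) or several (parallel).
print([block for block in response.content if block.type == "tool_use"])
```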
Missing information
If the user’s prompt doesn’t include enough information to fill all the required parameters for a tool, Claude Opus is much more likely to recognize that a parameter is missing and ask for it. Claude Sonnet may ask, especially when prompted to think before outputting a tool request. But it may also do its best to infer a reasonable value.

This behavior is not guaranteed, especially for more ambiguous prompts and for less intelligent models. If Claude Opus doesn’t have enough context to fill in the required parameters, it is far more likely to respond with a clarifying question instead of making a tool call.

For example, using the `get_weather` tool above, if you ask Claude “What’s the weather?” without specifying a location, Claude, particularly Claude Sonnet, may make a guess about tool inputs:
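The guessed tool call might look roughly like this JSON (the `id` and the guessed values are illustrative):

```json
{
  "type": "tool_use",
  "id": "toolu_01A09q90qw90lq917835lq9",
  "name": "get_weather",
  "input": {"location": "New York, NY", "unit": "fahrenheit"}
}
```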
Sequential tools

Some tasks may require calling multiple tools in sequence, using the output of one tool as the input to another. In such a case, Claude will call one tool at a time. If prompted to call the tools all at once, Claude is likely to guess parameters for tools further downstream if they depend on tool results from tools further upstream.

Here’s an example of using a `get_location` tool to get the user’s location, then passing that location to the `get_weather` tool. In this case, Claude would first call the `get_location` tool to get the user’s location. After you return the location in a `tool_result`, Claude would then call `get_weather` with that location to get the final answer.

The full conversation might look like:

| Role | Content |
|---|---|
| User | What’s the weather like where I am? |
| Assistant | I’ll find your current location first, then check the weather there. [Tool use for `get_location`] |
| User | [Tool result for `get_location` with matching id and result of San Francisco, CA] |
| Assistant | [Tool use for `get_weather` with the following input] `{"location": "San Francisco, CA", "unit": "fahrenheit"}` |
| User | [Tool result for `get_weather` with matching id and result of “59°F (15°C), mostly cloudy”] |
| Assistant | Based on your current location in San Francisco, CA, the weather right now is 59°F (15°C) and mostly cloudy. It’s a fairly cool and overcast day in the city. You may want to bring a light jacket if you’re heading outside. |

This example demonstrates how Claude can chain together multiple tool calls to answer a question that requires gathering data from different sources. The key steps are:
- Claude first realizes it needs the user’s location to answer the weather question, so it calls the `get_location` tool.
- The user (i.e. the client code) executes the actual `get_location` function and returns the result “San Francisco, CA” in a `tool_result` block.
- With the location now known, Claude proceeds to call the `get_weather` tool, passing in “San Francisco, CA” as the `location` parameter (as well as a guessed `unit` parameter, as `unit` is not a required parameter).
- The user again executes the actual `get_weather` function with the provided arguments and returns the weather data in another `tool_result` block.
- Finally, Claude incorporates the weather data into a natural language response to the original question.
Chain of thought tool use

By default, Claude Opus is prompted to think before it answers a tool use query to best determine whether a tool is necessary, which tool to use, and the appropriate parameters. Claude Sonnet and Claude Haiku are prompted to try to use tools as much as possible and are more likely to call an unnecessary tool or infer missing parameters. To prompt Sonnet or Haiku to better assess the user query before making tool calls, the following prompt can be used:

Chain of thought prompt
Answer the user's request using relevant tools (if they are available). Before calling a tool, do some analysis. First, think about which of the provided tools is the relevant tool to answer the user's request. Second, go through each of the required parameters of the relevant tool and determine if the user has directly provided or given enough information to infer a value. When deciding if the parameter can be inferred, carefully consider all the context to see if it supports a specific value. If all of the required parameters are present or can be reasonably inferred, proceed with the tool call. BUT, if one of the values for a required parameter is missing, DO NOT invoke the function (not even with fillers for the missing params) and instead, ask the user to provide the missing parameters. DO NOT ask for more information on optional parameters if it is not provided.
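One way to apply this is to pass the prompt above through the `system` parameter; a minimal sketch, assuming `tools` is the tool list you are offering (e.g. the `get_weather` tool defined earlier):

```python
import anthropic

client = anthropic.Anthropic()

# The full chain-of-thought prompt shown above, truncated here for brevity.
chain_of_thought_prompt = "Answer the user's request using relevant tools (if they are available). ..."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    system=chain_of_thought_prompt,
    tools=tools,  # assumed: the tool definitions from earlier on this page
    messages=[{"role": "user", "content": "What's the weather?"}],
)
```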
JSON mode

You can use tools to get Claude to produce JSON output that follows a schema, even if you don’t have any intention of running that output through a tool or function.

When using tools in this way:
- You usually want to provide a single tool
- You should set `tool_choice` (see Forcing tool use) to instruct the model to explicitly use that tool
- Remember that the model will pass the `input` to the tool, so the name of the tool and description should be from the model’s perspective.

The example below uses a `record_summary` tool to describe an image following a particular format.
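A sketch of this pattern; the image URL and the `record_summary` schema fields here are illustrative stand-ins rather than the original example:

```python
import base64
import httpx
import anthropic

client = anthropic.Anthropic()

# Fetch and base64-encode an image to summarize (URL is illustrative).
image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_media_type = "image/jpeg"
image_data = base64.standard_b64encode(httpx.get(image_url).content).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    tools=[
        {
            "name": "record_summary",
            "description": "Record a summary of an image using well-structured JSON.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "description": {
                        "type": "string",
                        "description": "A one to two sentence description of the image.",
                    },
                    "key_colors": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Dominant colors in the image.",
                    },
                },
                "required": ["description", "key_colors"],
            },
        }
    ],
    # Force Claude to call record_summary so the output always matches the schema.
    tool_choice={"type": "tool", "name": "record_summary"},
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image_media_type,
                        "data": image_data,
                    },
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)

# The structured JSON is in the tool_use block's input.
summary = next(block for block in response.content if block.type == "tool_use").input
print(summary)
```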
Pricing

Tool use requests are priced based on:
- The total number of input tokens sent to the model (including in the `tools` parameter)
- The number of output tokens generated
- For server-side tools, additional usage-based pricing (e.g., web search charges per search performed)

The additional tokens from tool use come from:
- The `tools` parameter in API requests (tool names, descriptions, and schemas)
- `tool_use` content blocks in API requests and responses
- `tool_result` content blocks in API requests

When you use `tools`, we also automatically include a special system prompt for the model which enables tool use. The number of tool use tokens required for each model is listed below (excluding the additional tokens listed above). Note that the table assumes at least one tool is provided. If no `tools` are provided, then a tool choice of `none` uses 0 additional system prompt tokens.
| Model | System prompt tokens (`auto`, `none`) | System prompt tokens (`any`, `tool`) |
|---|---|---|
| Claude Opus 4.1 | 346 tokens | 313 tokens |
| Claude Opus 4 | 346 tokens | 313 tokens |
| Claude Sonnet 4 | 346 tokens | 313 tokens |
| Claude Sonnet 3.7 | 346 tokens | 313 tokens |
| Claude Sonnet 3.5 (Oct, deprecated) | 346 tokens | 313 tokens |
| Claude Sonnet 3.5 (June, deprecated) | 294 tokens | 261 tokens |
| Claude Haiku 3.5 | 264 tokens | 340 tokens |
| Claude Opus 3 (deprecated) | 530 tokens | 281 tokens |
| Claude Sonnet 3 | 159 tokens | 235 tokens |
| Claude Haiku 3 | 264 tokens | 340 tokens |
When you send a tool use prompt, just like any other API request, the response reports input and output token counts as part of its `usage` metrics.
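For example, with the Python SDK, where `response` is any Messages API response:

```python
# Token counts (including tool-related tokens) are reported on every response.
print(response.usage.input_tokens)
print(response.usage.output_tokens)
```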
Next Steps
Explore our repository of ready-to-implement tool use code examples in our cookbooks:

- Calculator Tool: Learn how to integrate a simple calculator tool with Claude for precise numerical computations.
- Customer Service Agent: Build a responsive customer service bot that leverages client tools to enhance support.
- JSON Extractor: See how Claude and tool use can extract structured data from unstructured text.