# Tool Calling

Connect Dynamo to external tools and services using function calling

You can connect Dynamo to external tools and services using function calling (also known as tool calling). By providing a list of available functions, Dynamo can choose to output arguments for the relevant function(s), which you can then execute to augment the prompt with relevant external information.

Tool calling is controlled with the `tool_choice` and `tools` request parameters.

This page covers parser names for the default Dynamo-native path. For a comparison of all preprocessing options (including vLLM/SGLang chat-processor swap and tokenizer delegation) and routing compatibility, see Chat Processor Options.

## Prerequisites

To enable this feature, set the following flag when launching the backend worker:

- `--dyn-tool-call-parser`: selects the tool call parser from the supported list below

```bash
# <backend> can be sglang, trtllm, vllm, etc. based on your installation
python -m dynamo.<backend> --help
```

If no tool call parser is provided, Dynamo falls back to default tool call parsing based on the `<TOOLCALL>` and `<|python_tag|>` tool tags.
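
For illustration only (the exact payload is model-dependent; the function name here is hypothetical), default-format output wraps a JSON tool call in one of those tags, roughly like:

```text
<TOOLCALL>[{"name": "get_weather", "arguments": {"location": "San Francisco, CA", "unit": "celsius"}}]</TOOLCALL>
```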

If your model’s default chat template doesn’t support tool calling, but the model itself does, you can specify a custom chat template per worker with `python -m dynamo.<backend> --custom-jinja-template </path/to/template.jinja>`.
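
A hypothetical launch might look like this (the model name and template path are placeholders, and `hermes` is just one parser from the table below):

```bash
python -m dynamo.vllm --model <your-model> \
    --dyn-tool-call-parser hermes \
    --custom-jinja-template /path/to/tool_template.jinja
```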

If your model also emits reasoning content that should be separated from normal output, see Reasoning for the supported `--dyn-reasoning-parser` values.

## Supported Tool Call Parsers

The table below lists the currently supported tool call parsers in Dynamo’s registry. The Upstream name column shows where the vLLM or SGLang parser name differs from Dynamo’s; this matters when using `--dyn-chat-processor vllm` or `sglang` (see Chat Processor Options). A blank upstream column means the same name works everywhere; "Dynamo-only" means no upstream parser exists for this format.

| Parser Name | Models | Upstream name | Notes |
|---|---|---|---|
| `deepseek_v3` | DeepSeek V3, DeepSeek R1-0528+ | SGLang: `deepseekv3` | Special Unicode markers |
| `deepseek_v3_1` | DeepSeek V3.1 | Dynamo-only | JSON separators |
| `deepseek_v3_2` | DeepSeek V3.2+ | Dynamo-only | DSML tags (`<\|DSML\|function_calls>...`) |
| `default` | (fallback) | Dynamo-only | Empty JSON config (no start/end tokens). Prefer a model-specific parser for production use. |
| `glm47` | GLM-4.5, GLM-4.7 | Dynamo-only | XML `<arg_key>`/`<arg_value>` |
| `harmony` | gpt-oss-20b / -120b | Dynamo-only | Harmony channel format |
| `hermes` | Qwen2.5-*, QwQ-32B, Qwen3-Instruct, Qwen3-Think, NousHermes-2/3 | vLLM: `qwen2_5`; SGLang: `qwen25` (for Qwen models) | `<tool_call>` JSON |
| `jamba` | Jamba 1.5 / 1.6 / 1.7 | Dynamo-only | `<tool_calls>` JSON |
| `kimi_k2` | Kimi K2 Instruct/Thinking, Kimi K2.5 | | Pair with `--dyn-reasoning-parser kimi` or `kimi_k25` |
| `llama3_json` | Llama 3 / 3.1 / 3.2 / 3.3 Instruct | | `<\|python_tag\|>` tool syntax |
| `minimax_m2` | MiniMax M2 / M2.1 | vLLM: `minimax` | XML `<minimax:tool_call>` |
| `mistral` | Mistral / Mixtral / Mistral-Nemo, Magistral | | `[TOOL_CALLS]...[/TOOL_CALLS]` |
| `nemotron_deci` | Nemotron-Super / -Ultra / -Deci, Llama-Nemotron-Ultra / -Super | Dynamo-only | `<TOOLCALL>` JSON |
| `nemotron_nano` | Nemotron-Nano | Dynamo-only | Alias for `qwen3_coder` |
| `phi4` | Phi-4, Phi-4-mini, Phi-4-mini-reasoning | vLLM: `phi4_mini_json` | `functools[...]` JSON |
| `pythonic` | Llama 4 (Scout / Maverick) | | Python-list tool syntax |
| `qwen3_coder` | Qwen3-Coder | | XML `<tool_call><function=...>` |

For Kimi K2.5 thinking models, pair `--dyn-tool-call-parser kimi_k2` with `--dyn-reasoning-parser kimi_k25` from Reasoning so that both `<think>` blocks and tool calls are parsed correctly from the same response.
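
For example, a Kimi K2.5 launch would combine the two flags (the model name is a placeholder):

```bash
python -m dynamo.vllm --model <kimi-k2.5-model> \
    --dyn-tool-call-parser kimi_k2 \
    --dyn-reasoning-parser kimi_k25
```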

## Examples

### Launch Dynamo Frontend and Backend

```bash
# launch backend worker
python -m dynamo.vllm --model openai/gpt-oss-20b --dyn-tool-call-parser harmony

# launch frontend worker
python -m dynamo.frontend
```

### Tool Calling Request Examples

**Example 1**

```python
from openai import OpenAI
import json

client = OpenAI(base_url="http://localhost:8081/v1", api_key="dummy")

def get_weather(location: str, unit: str):
    return f"Getting the weather for {location} in {unit}..."

tool_functions = {"get_weather": get_weather}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather like in San Francisco in Celsius?"}],
    tools=tools,
    tool_choice="auto",
    max_tokens=10000
)
print(f"{response}")
tool_call = response.choices[0].message.tool_calls[0].function
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")
print(f"Result: {tool_functions[tool_call.name](**json.loads(tool_call.arguments))}")
```
**Example 2**

```python
# Use tools defined in example 1

time_tool = {
    "type": "function",
    "function": {
        "name": "get_current_time_nyc",
        "description": "Get the current time in NYC.",
        "parameters": {}
    }
}

tools.append(time_tool)

messages = [
    {"role": "user", "content": "What's the current time in New York?"}
]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # client.models.list().data[1].id
    messages=messages,
    tools=tools,
    tool_choice="auto",
    max_tokens=100,
)
print(f"{response}")
tool_call = response.choices[0].message.tool_calls[0].function
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")
```
**Example 3**

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_tourist_attractions",
            "description": "Get a list of top tourist attractions for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city to find attractions for.",
                    }
                },
                "required": ["city"],
            },
        },
    },
]

def get_messages():
    return [
        {
            "role": "user",
            "content": (
                "I'm planning a trip to Tokyo next week. what are some top tourist attractions in Tokyo?"
            ),
        },
    ]

messages = get_messages()

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    max_tokens=100,
)
print(f"{response}")
tool_call = response.choices[0].message.tool_calls[0].function
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")
```
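
Tool calls can also be consumed incrementally with streaming. This sketch continues Example 3 and assumes Dynamo's OpenAI-compatible endpoint streams `delta.tool_calls` fragments the same way the OpenAI API does (a single tool call is assumed; argument chunks must be accumulated before parsing):

```python
# continues Example 3: stream the response and accumulate tool-call fragments
stream = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=get_messages(),
    tools=tools,
    tool_choice="auto",
    max_tokens=100,
    stream=True,
)

name, args = None, ""
for chunk in stream:
    if not chunk.choices:
        continue  # e.g., a trailing usage-only chunk
    delta = chunk.choices[0].delta
    for frag in delta.tool_calls or []:
        if frag.function and frag.function.name:
            name = frag.function.name
        if frag.function and frag.function.arguments:
            args += frag.function.arguments

print(f"Function called: {name}")
print(f"Arguments: {args}")
```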