Authorizations
Use the following format for authentication: Bearer <your api key>
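As a minimal sketch, the Bearer token goes in the `Authorization` header of each request. The endpoint URL and key below are placeholders, not values confirmed by this page:

```python
# Minimal sketch of an authenticated request. The endpoint URL and API key
# are placeholders; substitute your real values.
import json
import urllib.request

API_KEY = "your-api-key"

def build_request(url: str, payload: dict) -> urllib.request.Request:
    """Attach the API key as a Bearer token in the Authorization header."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("https://api.example.com/v1/chat/completions",
                    {"model": "deepseek-ai/DeepSeek-V3"})
```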
Body
The model name. To maintain service quality, we periodically change the models offered by this service, including but not limited to bringing models online or offline and adjusting model service capabilities. Where feasible, we will notify you of such changes through channels such as announcements or message pushes.
Available options:
inclusionAI/Ling-mini-2.0
ByteDance-Seed/Seed-OSS-36B-Instruct
Qwen/Qwen3-30B-A3B-Instruct-2507
Qwen/Qwen3-235B-A22B-Thinking-2507
Qwen/Qwen3-235B-A22B-Instruct-2507
baidu/ERNIE-4.5-300B-A47B
moonshotai/Kimi-K2-Instruct-0905
moonshotai/Kimi-K2-Instruct
ascend-tribe/pangu-pro-moe
tencent/Hunyuan-A13B-Instruct
MiniMaxAI/MiniMax-M1-80k
Qwen/QwQ-32B
Qwen/Qwen2.5-14B-Instruct
Qwen/Qwen2.5-32B-Instruct
Qwen/Qwen2.5-72B-Instruct
Qwen/Qwen2.5-72B-Instruct-128K
Qwen/Qwen2.5-7B-Instruct
Qwen/Qwen2.5-Coder-32B-Instruct
Qwen/Qwen2.5-VL-32B-Instruct
Qwen/Qwen2.5-VL-72B-Instruct
Qwen/Qwen2.5-VL-7B-Instruct
Qwen/Qwen3-14B
Qwen/Qwen3-235B-A22B
Qwen/Qwen3-30B-A3B
Qwen/Qwen3-30B-A3B-Thinking-2507
Qwen/Qwen3-32B
Qwen/Qwen3-8B
Qwen/Qwen3-Coder-30B-A3B-Instruct
Qwen/Qwen3-Coder-480B-A35B-Instruct
THUDM/GLM-4-32B-0414
THUDM/GLM-4-9B-0414
THUDM/GLM-4.1V-9B-Thinking
THUDM/GLM-Z1-32B-0414
THUDM/GLM-Z1-9B-0414
deepseek-ai/DeepSeek-R1
deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
deepseek-ai/DeepSeek-V3.1
deepseek-ai/DeepSeek-V3
deepseek-ai/deepseek-vl2
meta-llama/Meta-Llama-3.1-8B-Instruct
stepfun-ai/step3
zai-org/GLM-4.5
zai-org/GLM-4.5-Air
zai-org/GLM-4.5V
"deepseek-ai/DeepSeek-V3"
A list of messages comprising the conversation so far.
1 - 10 elements
If set, tokens are returned as Server-Sent Events as they are made available. The stream terminates with data: [DONE]
false
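As an illustrative sketch, the request body below combines the fields documented above, and the helper parses a simulated SSE stream terminated by `data: [DONE]`. The per-chunk JSON shape assumed here (OpenAI-style `choices[0].delta.content`) is an assumption, not confirmed by this page:

```python
# Sketch: a request body using the fields above, plus parsing of a simulated
# SSE stream ending in "data: [DONE]". Chunk shape is assumed OpenAI-style.
import json

payload = {
    "model": "deepseek-ai/DeepSeek-V3",                # default model
    "messages": [{"role": "user", "content": "Hi!"}],  # 1 - 10 elements
    "stream": True,                                    # tokens arrive as SSE
}

def collect_tokens(sse_lines):
    """Accumulate streamed content until the [DONE] sentinel."""
    tokens = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":          # stream terminator
            break
        chunk = json.loads(data)
        tokens.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(tokens)

# Simulated events, for illustration only:
simulated = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
```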
The maximum number of tokens to generate. The upper bound of max_tokens equals the model's context length. Since some model inference services are still being updated, please do not set max_tokens to this maximum (the full context length) when making a request. It is recommended to reserve around 10k tokens as space for the input content.
1 <= x <= 16384
4096
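A small helper can make the reservation guidance above concrete: subtract roughly 10k tokens from the context length for the input, then clamp to the documented 1-16384 range. The context-length values passed in below are placeholders:

```python
# Illustrative helper: pick a max_tokens value that reserves ~10k tokens of
# the context window for input, clamped to the documented 1..16384 range.
# Context lengths used in the calls are placeholders, not real model specs.
def safe_max_tokens(context_length: int, reserve: int = 10_000,
                    hard_cap: int = 16_384) -> int:
    """Return a max_tokens value that leaves `reserve` tokens for the input."""
    return max(1, min(context_length - reserve, hard_cap))

print(safe_max_tokens(32_768))   # hard cap applies -> 16384
print(safe_max_tokens(12_000))   # 12000 - 10000 -> 2000
```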
Switches between thinking and non-thinking modes. Default is True. This field supports the following models:
- Qwen/Qwen3-8B
- Qwen/Qwen3-14B
- Qwen/Qwen3-32B
- Qwen/Qwen3-30B-A3B
- Qwen/Qwen3-235B-A22B
- tencent/Hunyuan-A13B-Instruct
- zai-org/GLM-4.5V
- deepseek-ai/DeepSeek-V3.1
If you want to use the function call feature for deepseek-ai/DeepSeek-V3.1, you need to set enable_thinking to false.
false
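A minimal sketch of the deepseek-ai/DeepSeek-V3.1 constraint noted above: when supplying tools for function calling, enable_thinking is set to false. The tool schema here is an illustrative example, not a documented function:

```python
# Sketch: function calling with deepseek-ai/DeepSeek-V3.1 requires
# enable_thinking to be false (per the note above). The get_weather tool
# is a hypothetical example.
payload = {
    "model": "deepseek-ai/DeepSeek-V3.1",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "enable_thinking": False,   # required when using function calls here
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
}
```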
Maximum number of tokens for chain-of-thought output. This field applies to all Reasoning models.
128 <= x <= 32768
4096
Dynamic filtering threshold that adapts based on token probabilities. This field only applies to Qwen3 models.
0 <= x <= 1
0.05
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
null
Determines the degree of randomness in the response.
0.7
The top_p (nucleus) parameter is used to dynamically adjust the number of choices for each predicted token based on the cumulative probabilities.
0.7
50
0.5
Number of generations to return
1
An object specifying the format that the model must output.
A list of tools the model may call. Currently, only functions are supported as tools. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
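As a hedged sketch, the helper below wraps function schemas as tool entries and enforces the 128-function cap stated above. The schema layout (name / description / JSON-Schema parameters) follows the OpenAI-compatible convention this page appears to use, and the example function is hypothetical:

```python
# Sketch: building a tools list with one illustrative function definition,
# enforcing the documented 128-function maximum. Schema layout assumes the
# OpenAI-compatible convention; lookup_order is a hypothetical example.
def make_tools(functions):
    """Wrap function schemas as tool entries; at most 128 are allowed."""
    if len(functions) > 128:
        raise ValueError("at most 128 functions are supported")
    return [{"type": "function", "function": fn} for fn in functions]

tools = make_tools([{
    "name": "lookup_order",
    "description": "Fetch an order by its id.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}])
```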