Chat Completions
POST /chat/completions
curl --request POST \
  --url https://api.siliconflow.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "deepseek-ai/DeepSeek-V3",
  "messages": [
    {
      "role": "user",
      "content": "What opportunities and challenges will the Chinese large model industry face in 2025?"
    }
  ]
}'
{
  "id": "<string>",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "<string>",
        "reasoning_content": "<string>",
        "tool_calls": [
          {
            "id": "<string>",
            "type": "function",
            "function": {
              "name": "<string>",
              "arguments": "<string>"
            }
          }
        ]
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  },
  "created": 123,
  "model": "<string>",
  "object": "chat.completion"
}
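For reference, the same request as the curl example above can be sketched in Python using only the standard library. The endpoint URL, headers, and body are taken from the curl example; the API key is a placeholder. This builds the request object without sending it — call `urllib.request.urlopen(req)` to execute it.

```python
import json
import urllib.request

API_KEY = "<token>"  # placeholder -- replace with your own API key

payload = {
    "model": "deepseek-ai/DeepSeek-V3",
    "messages": [
        {
            "role": "user",
            "content": "What opportunities and challenges will the Chinese large model industry face in 2025?",
        }
    ],
}

# Construct the POST request; urlopen(req) would send it to the API.
req = urllib.request.Request(
    "https://api.siliconflow.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)  # https://api.siliconflow.com/v1/chat/completions
```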

Authorizations

Authorization
string
header
required

Use the following format for authentication: Bearer <your api key>

Body

application/json
model
enum<string>
default:deepseek-ai/DeepSeek-V3
required

Corresponding model name. To better ensure service quality, we make periodic changes to the models provided by this service, including but not limited to taking models online or offline and adjusting model service capabilities. Where feasible, we will notify you of such changes through announcements, message pushes, or other appropriate channels.

Available options:
inclusionAI/Ling-mini-2.0,
ByteDance-Seed/Seed-OSS-36B-Instruct,
Qwen/Qwen3-30B-A3B-Instruct-2507,
Qwen/Qwen3-235B-A22B-Thinking-2507,
Qwen/Qwen3-235B-A22B-Instruct-2507,
baidu/ERNIE-4.5-300B-A47B,
moonshotai/Kimi-K2-Instruct-0905,
moonshotai/Kimi-K2-Instruct,
ascend-tribe/pangu-pro-moe,
tencent/Hunyuan-A13B-Instruct,
MiniMaxAI/MiniMax-M1-80k,
Qwen/QwQ-32B,
Qwen/Qwen2.5-14B-Instruct,
Qwen/Qwen2.5-32B-Instruct,
Qwen/Qwen2.5-72B-Instruct,
Qwen/Qwen2.5-72B-Instruct-128K,
Qwen/Qwen2.5-7B-Instruct,
Qwen/Qwen2.5-Coder-32B-Instruct,
Qwen/Qwen2.5-VL-32B-Instruct,
Qwen/Qwen2.5-VL-72B-Instruct,
Qwen/Qwen2.5-VL-7B-Instruct,
Qwen/Qwen3-14B,
Qwen/Qwen3-235B-A22B,
Qwen/Qwen3-30B-A3B,
Qwen/Qwen3-30B-A3B-Thinking-2507,
Qwen/Qwen3-32B,
Qwen/Qwen3-8B,
Qwen/Qwen3-Coder-30B-A3B-Instruct,
Qwen/Qwen3-Coder-480B-A35B-Instruct,
THUDM/GLM-4-32B-0414,
THUDM/GLM-4-9B-0414,
THUDM/GLM-4.1V-9B-Thinking,
THUDM/GLM-Z1-32B-0414,
THUDM/GLM-Z1-9B-0414,
deepseek-ai/DeepSeek-R1,
deepseek-ai/DeepSeek-R1-Distill-Qwen-14B,
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B,
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B,
deepseek-ai/DeepSeek-V3.1,
deepseek-ai/DeepSeek-V3,
deepseek-ai/deepseek-vl2,
meta-llama/Meta-Llama-3.1-8B-Instruct,
stepfun-ai/step3,
zai-org/GLM-4.5,
zai-org/GLM-4.5-Air,
zai-org/GLM-4.5V
Example:

"deepseek-ai/DeepSeek-V3"

messages
object[]
required

A list of messages comprising the conversation so far.

Required array length: 1 - 10 elements
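As a sketch, a messages array within the 1 - 10 element limit might look like this in Python, using the standard system/user role convention shown in the request example above:

```python
# Conversation history passed as the "messages" field; must contain 1-10 elements.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the outlook for large models in 2025."},
]

# The API rejects arrays outside the documented range.
assert 1 <= len(messages) <= 10
```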
stream
boolean

If set, partial tokens are returned as Server-Sent Events as they become available. The stream terminates with data: [DONE]

Example:

false
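A minimal sketch of consuming such a stream, assuming an OpenAI-compatible chunk shape where each SSE line is `data: <json>` carrying `choices[0].delta.content`, and the stream ends with `data: [DONE]`. The lines are simulated here with a list rather than a live connection:

```python
import json

# Simulated SSE lines as they would arrive from the streaming endpoint.
lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]

pieces = []
for line in lines:
    if not line.startswith("data: "):
        continue  # skip blank keep-alive lines
    body = line[len("data: "):]
    if body == "[DONE]":
        break  # terminator sent at the end of the stream
    chunk = json.loads(body)
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        pieces.append(delta["content"])

print("".join(pieces))  # Hello world
```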

max_tokens
integer

The maximum number of tokens to generate. The upper limit of max_tokens is the model's context length. Since some model inference services are still being updated, please do not set max_tokens to the maximum value (the full context length) when making a request; it is recommended to reserve around 10k tokens as space for the input content.

Required range: 1 <= x <= 16384
Example:

4096

enable_thinking
boolean

Switches between thinking and non-thinking modes. Default is True. This field supports the following models:

- Qwen/Qwen3-8B
- Qwen/Qwen3-14B
- Qwen/Qwen3-32B
- Qwen/Qwen3-30B-A3B
- Qwen/Qwen3-235B-A22B
- tencent/Hunyuan-A13B-Instruct
- zai-org/GLM-4.5V
- deepseek-ai/DeepSeek-V3.1

If you want to use the function call feature for deepseek-ai/DeepSeek-V3.1, you need to set enable_thinking to false.

Example:

false
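For instance, a request body disabling thinking mode for deepseek-ai/DeepSeek-V3.1 — required when using function calls with that model, per the note above (the user message is illustrative):

```python
import json

# Request body sketch: thinking disabled so DeepSeek-V3.1 can use function calls.
payload = {
    "model": "deepseek-ai/DeepSeek-V3.1",
    "messages": [{"role": "user", "content": "What is the weather in Shanghai?"}],
    "enable_thinking": False,
}

# Python's False serializes to the JSON literal false expected by the API.
body = json.dumps(payload)
print('"enable_thinking": false' in body)  # True
```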

thinking_budget
integer
default:4096

Maximum number of tokens for chain-of-thought output. This field applies to all Reasoning models.

Required range: 128 <= x <= 32768
Example:

4096

min_p
number

Dynamic filtering threshold that adapts based on token probabilities. This field only applies to Qwen3 models.

Required range: 0 <= x <= 1
Example:

0.05

stop

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Example:

null

temperature
number

Determines the degree of randomness in the response.

Example:

0.7

top_p
number
default:0.7

The top_p (nucleus) parameter is used to dynamically adjust the number of choices for each predicted token based on the cumulative probabilities.

Example:

0.7

top_k
number
Example:

50

frequency_penalty
number
Example:

0.5

n
integer

Number of generations to return

Example:

1

response_format
object

An object specifying the format that the model must output.
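The accepted values are not enumerated here; OpenAI-compatible APIs commonly accept {"type": "json_object"} to request JSON output, so treat the value below as an assumption rather than a confirmed option:

```python
import json

# Assumed value: OpenAI-compatible APIs use {"type": "json_object"} to force JSON output.
payload = {
    "model": "deepseek-ai/DeepSeek-V3",
    "messages": [{"role": "user", "content": "List three colors as JSON."}],
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload["response_format"]))  # {"type": "json_object"}
```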

tools
object[]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
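A sketch of a single function tool definition, following the OpenAI-style tool schema implied by the tool_calls shape in the response example above. The function name and parameters are illustrative, not part of any real API:

```python
# One entry of the "tools" array; at most 128 such functions may be supplied.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

assert len(tools) <= 128
print(tools[0]["function"]["name"])  # get_weather
```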

Response

200

id
string
choices
object[]
usage
object
created
integer
model
string
object
enum<string>
Available options:
chat.completion