Other Issues
1. Model Output Encoding Issues
Currently, some models are prone to encoding issues in their output when sampling parameters are not set. In such cases, you can try setting parameters such as temperature, top_k, top_p, and frequency_penalty.
Modify the payload as follows, adjusting as needed for different languages:
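For example, a minimal Python sketch, assuming the platform's OpenAI-compatible chat completions endpoint (the model name and parameter values are illustrative starting points, not required settings):

```python
import requests

payload = {
    "model": "Qwen/QwQ-32B-Preview",  # illustrative; use any chat model
    "messages": [{"role": "user", "content": "Hello"}],
    # Sampling parameters that often resolve garbled output:
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.7,
    "frequency_penalty": 0.0,
}

response = requests.post(
    "https://api.siliconflow.cn/v1/chat/completions",
    headers={"Authorization": "Bearer <your_api_key>"},
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])
```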
2. Explanation of max_tokens
For the LLM models provided by the platform:
- Models with a max_tokens limit of 16384:
  - deepseek-ai/DeepSeek-R1
  - Qwen/QVQ-72B-Preview
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
- Models with a max_tokens limit of 8192:
  - Qwen/QwQ-32B-Preview
- Models with a max_tokens limit of 4096:
  - All other LLM models not listed above
If you have special requirements, please send an email to contact@siliconflow.com for feedback.
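As a sketch, a request payload that respects these limits might look like the following (the prompt is placeholder text):

```python
# max_tokens must not exceed the model's cap listed above;
# output beyond max_tokens is truncated.
payload = {
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [{"role": "user", "content": "..."}],
    "max_tokens": 16384,  # 16384 for the R1 series; 8192 or 4096 for others
}
```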
3. Explanation of context_length
The context_length varies across LLM models. You can search for a specific model on the Models page to view its details.
4. Are There Any Time and Quality Requirements for Custom Voice Samples in the Voice Models?
- For cosyvoice2, the custom voice sample must be shorter than 30 seconds. To ensure the quality of the generated voice, it is recommended to upload a sample that is 8 to 10 seconds long, with clear pronunciation and no background noise or interference. You can check the duration locally before uploading, as in the sketch below.
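A minimal duration check for WAV samples, using only Python's standard library (the thresholds mirror the cosyvoice2 guidance above):

```python
import wave

def check_voice_sample(path: str) -> float:
    """Return the sample duration in seconds and flag values outside
    the cosyvoice2 guidance (< 30s required, 8-10s recommended)."""
    with wave.open(path, "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()
    if duration >= 30:
        raise ValueError(f"Sample is {duration:.1f}s; it must be under 30s.")
    if not 8 <= duration <= 10:
        print(f"Note: {duration:.1f}s is outside the recommended 8-10s range.")
    return duration

check_voice_sample("my_voice_sample.wav")
```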
5. Output Truncation Issues in Model Inference
Here are several areas to check for the issue:
- When encountering output truncation through API requests:
  - Max Tokens Setting: Set max_tokens to an appropriate value; output that exceeds max_tokens is truncated. For the DeepSeek R1 series, max_tokens can be set up to 16,384.
  - Stream Request Setting: Non-streaming requests are prone to 504 timeouts when the output is long; prefer streaming requests.
  - Client Timeout Setting: Increase the client timeout so the connection is not closed before the output finishes (see the sketch after this list).
- When encountering output truncation through third-party client requests:
  - Cherry Studio has a default max_tokens of 4,096. Enable the "Enable Message Length Limit" switch to set max_tokens to an appropriate value.
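A minimal streaming sketch combining these settings, assuming the platform's OpenAI-compatible chat completions endpoint (the model name, token budget, and timeout values are illustrative):

```python
import requests

payload = {
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [{"role": "user", "content": "Write a detailed report."}],
    "max_tokens": 16384,  # R1 series cap; output beyond this is truncated
    "stream": True,       # streaming avoids 504 timeouts on long outputs
}

# Generous (connect, read) timeouts so the client does not cut off
# a long generation mid-stream.
response = requests.post(
    "https://api.siliconflow.cn/v1/chat/completions",
    headers={"Authorization": "Bearer <your_api_key>"},
    json=payload,
    stream=True,
    timeout=(10, 600),
)
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))  # raw SSE lines; parse as needed
```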
6. Troubleshooting 429 Error During Model Usage
Here are some areas to check for the issue:
- General Users: Verify your user tier and the model's corresponding rate limits. If your requests exceed the rate limits, retry after some time; a simple backoff sketch follows this list.
- Dedicated Instance Users: Dedicated instances typically do not have rate limits. If a 429 error occurs, first confirm that the correct model name for the dedicated instance is being called, and check that the api_key used matches the dedicated instance.
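For general users, a simple retry with exponential backoff is usually enough to ride out rate limits; a minimal sketch (the retry count and delays are illustrative):

```python
import time

import requests

def post_with_retry(url: str, headers: dict, payload: dict, max_retries: int = 5):
    """POST a request, retrying on HTTP 429 with exponential backoff."""
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        time.sleep(delay)  # wait before the next attempt
        delay *= 2         # double the delay each retry
    raise RuntimeError("Still rate-limited after retries.")
```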
7. Account Balance Insufficient Despite Successful Recharge
Here are some areas to check for the issue:
- Ensure the api_key being used matches the account that was just recharged.
- If the api_key is correct, the recharge may be delayed by network issues. Wait a few minutes and then retry.