Vision
1. Usage Scenarios
Vision-Language Models (VLMs) are large language models that can process both visual (image) and linguistic (text) input modalities. With a VLM you can supply images together with text, and the model understands the image content and the conversational context while following your instructions to respond. For example:
- Visual Content Interpretation: The model can interpret and describe the information in an image, such as objects, text, spatial relationships, colors, and atmosphere.
- Multi-turn Conversations Combining Visual Content and Context.
- Partial Replacement of Traditional Machine Vision Models like OCR.
- Future Applications: With continuous improvements in model capabilities, VLMs can be applied to areas such as visual agents and robotics.
2. Usage Method
For VLM models, you can invoke the /chat/completions API by constructing a message containing either an image URL or a base64-encoded image. The detail parameter can be used to control how the image is preprocessed.
2.1 Explanation of Image Detail Control Parameters
SiliconCloud provides three options for the detail parameter: low, high, and auto.
For currently supported models, if detail is not specified or is set to high, the model will use the high (“high resolution”) mode. If set to low or auto, the model will use the low (“low resolution”) mode.
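For reference, here is a minimal sketch of where detail sits in a request, assuming the OpenAI-compatible image_url content schema; the image URL is a placeholder.

```python
# One image entry inside a message's "content" array.
# "detail" accepts "low", "high", or "auto"; omitting it behaves like "high".
image_part = {
    "type": "image_url",
    "image_url": {
        "url": "https://example.com/sample.jpg",  # placeholder image URL
        "detail": "low",
    },
}
```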
2.2 Example Formats for a message Containing Images
2.2.1 Using Image URLs
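Below is a minimal sketch of a request that passes an image by URL. It assumes the OpenAI-compatible /chat/completions schema; the endpoint URL, API key, and image URL are placeholders to replace with your own values.

```python
import requests

API_URL = "https://api.siliconflow.cn/v1/chat/completions"  # assumed endpoint; use the one from your console
API_KEY = "<your-api-key>"

payload = {
    "model": "Qwen/Qwen2-VL-72B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/sample.jpg",  # publicly reachable image URL
                        "detail": "high",
                    },
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
}

resp = requests.post(API_URL, headers={"Authorization": f"Bearer {API_KEY}"}, json=payload)
print(resp.json()["choices"][0]["message"]["content"])
```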
2.2.2 Base64 Format
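A sketch of the same request with a locally encoded image, again assuming the OpenAI-compatible schema; the data-URL prefix should match the actual image type, and the file path is a placeholder.

```python
import base64
import requests

API_URL = "https://api.siliconflow.cn/v1/chat/completions"  # assumed endpoint
API_KEY = "<your-api-key>"

# Read a local file and encode it as base64.
with open("local_image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "Qwen/Qwen2-VL-72B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        # Base64 images are passed as a data URL.
                        "url": f"data:image/jpeg;base64,{image_b64}",
                        "detail": "low",
                    },
                },
                {"type": "text", "text": "What text appears in this image?"},
            ],
        }
    ],
}

resp = requests.post(API_URL, headers={"Authorization": f"Bearer {API_KEY}"}, json=payload)
print(resp.json()["choices"][0]["message"]["content"])
```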
2.2.3 Multiple Images, Each in Either Format
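Multiple image parts can be mixed freely within one message. The sketch below combines a URL image and a base64 image, with the same assumptions and placeholders as the examples above.

```python
# "image_b64" is produced as in the base64 example above.
payload = {
    "model": "Qwen/Qwen2-VL-72B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/first.jpg"}},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text", "text": "Compare these two images."},
            ],
        }
    ],
}
```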
DeepseekVL2 series models are suitable for handling short contexts. It is recommended to input no more than 2 images. If more than 2 images are provided, the model will automatically resize them to 384x384, and the specified detail parameter will be ignored.
3. Supported Models
Currently supported VLM models:
- Qwen Series:
  - Qwen/Qwen2-VL-72B-Instruct
- DeepseekVL2 Series:
  - deepseek-ai/deepseek-vl2
4. Billing for Visual Input Content
For visual inputs such as images, the model converts them into tokens, which are combined with the textual information as part of the model's context. This means visual inputs are also billed. Different models use different methods to convert visual content, as outlined below.
4.1 Qwen Series
Rules:
Qwen supports a maximum pixel area of 3584 * 3584 = 12845056 and a minimum pixel area of 56 * 56 = 3136. Each image's longer and shorter sides are first rescaled to multiples of 28, i.e. to (h * 28) * (w * 28). If the resulting dimensions fall outside the minimum and maximum pixel range, the image is proportionally resized to fit within the range.
- When detail=low, all images are resized to 448 * 448 and consume 256 tokens.
- When detail=high, the image is proportionally scaled: its dimensions are rounded up to the nearest multiple of 28, then resized if necessary to fit within the pixel range (3136, 12845056) while keeping both dimensions multiples of 28. The token count is the number of 28 * 28 patches, i.e. (width / 28) * (height / 28).
Examples:
- Images with dimensions 224 * 448, 1024 * 1024, and 3172 * 4096 each consume 256 tokens when detail=low.
- An image with dimensions 224 * 448 consumes (224/28) * (448/28) = 8 * 16 = 128 tokens when detail=high.
- An image with dimensions 1024 * 1024 is rounded up to 1036 * 1036 and consumes (1036/28) * (1036/28) = 1369 tokens when detail=high.
- An image with dimensions 3172 * 4096 is resized to 3136 * 4060 and consumes (3136/28) * (4060/28) = 16240 tokens when detail=high.
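The arithmetic above can be reproduced with a short helper. This is an illustrative estimate that follows the stated rules, not the exact server-side implementation, so treat small deviations as possible.

```python
import math

def qwen_image_tokens(width: int, height: int, detail: str = "high") -> int:
    """Rough token estimate for a Qwen series image input, per the rules above."""
    if detail == "low":
        return 256  # every image is resized to 448 * 448

    MIN_PIXELS = 56 * 56        # 3136
    MAX_PIXELS = 3584 * 3584    # 12845056

    # Round each side up to the nearest multiple of 28.
    w = math.ceil(width / 28) * 28
    h = math.ceil(height / 28) * 28

    # If the area falls outside the allowed range, rescale proportionally,
    # keeping both sides multiples of 28.
    if h * w > MAX_PIXELS:
        scale = math.sqrt(MAX_PIXELS / (h * w))
        h = math.floor(h * scale / 28) * 28
        w = math.floor(w * scale / 28) * 28
    elif h * w < MIN_PIXELS:
        scale = math.sqrt(MIN_PIXELS / (h * w))
        h = math.ceil(h * scale / 28) * 28
        w = math.ceil(w * scale / 28) * 28

    # One token per 28 * 28 patch.
    return (h // 28) * (w // 28)

# Matches the worked examples above:
# qwen_image_tokens(224, 448)   -> 128
# qwen_image_tokens(1024, 1024) -> 1369
# qwen_image_tokens(3172, 4096) -> 16240
```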
4.2 DeepseekVL2 Series
Rules:
For each image, DeepseekVL2 processes two parts: a global_view and a local_view. The global_view resizes the original image to 384x384, while the local_view divides the image into 384x384 blocks. Additional tokens are added between blocks to maintain continuity.
- When detail=low, all images are resized to 384x384.
- When detail=high, images are resized to (h * 384) x (w * 384), where 1 <= h * w <= 9.
- The scaling dimensions (h, w) are chosen as follows:
  - Both h and w are integers, and 1 <= h * w <= 9.
  - The pixel count of the resized image is compared with that of the original image, and the pair that minimizes the difference is selected.
- Token consumption is calculated as (h * w + 1) * 196 + (w + 1) * 14 + 1 tokens.
Examples:
- Images with dimensions 224 x 448, 1024 x 1024, and 2048 x 4096 each consume 421 tokens when detail=low.
- An image with dimensions 384 x 768 consumes (1 * 2 + 1) * 196 + (2 + 1) * 14 + 1 = 631 tokens when detail=high.
- An image with dimensions 1024 x 1024 is resized to 1152 x 1152 and consumes (3 * 3 + 1) * 196 + (3 + 1) * 14 + 1 = 2017 tokens when detail=high.
- An image with dimensions 2048 x 4096 is resized to 768 x 1536 and consumes (2 * 4 + 1) * 196 + (4 + 1) * 14 + 1 = 1835 tokens when detail=high.
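As with the Qwen rules, the formula can be checked with a small helper. The grid dimensions (h, w) are whatever the model selects for a given image, so the function below only evaluates the stated token formula for a chosen grid; it does not reproduce the model's resolution-matching logic.

```python
def deepseek_vl2_tokens(h: int, w: int) -> int:
    """Token count for a DeepseekVL2 image tiled into an h x w grid of 384x384 blocks.

    h and w are the grid dimensions chosen by the model (1 <= h * w <= 9);
    with detail=low the image is a single 384x384 block, i.e. h = w = 1.
    """
    assert 1 <= h * w <= 9
    return (h * w + 1) * 196 + (w + 1) * 14 + 1

# Matches the worked examples above:
# deepseek_vl2_tokens(1, 1) -> 421   (detail=low, or any 384x384 image)
# deepseek_vl2_tokens(1, 2) -> 631   (384 x 768)
# deepseek_vl2_tokens(3, 3) -> 2017  (1024 x 1024, resized to 1152 x 1152)
# deepseek_vl2_tokens(2, 4) -> 1835  (2048 x 4096, resized to 768 x 1536)
```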