Qwen: Qwen3.5-27B
Qwen3.5-27B is a multimodal model for vision-language understanding. It combines multimodal input handling and image understanding with a 262K-token context window and a low-cost profile. Use it for image and video understanding where latency, cost, and throughput matter. It is a practical choice for teams that need reliable output, flexible deployment, and room to scale.
Pricing (USD per 1M tokens):
- Input: $0.20
- Output: $1.56
- Cached: $0.03
- Batch: $0.05
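To make the pricing concrete, here is a minimal sketch of a per-request cost estimate built from the listed rates. The helper function and the assumption that cached tokens replace full-price input tokens are illustrative, not an official billing formula.

```python
# USD per 1M tokens, taken from the pricing list above.
PRICE_PER_M = {
    "input": 0.20,
    "output": 1.56,
    "cached": 0.03,  # cached input reads (assumed semantics)
    "batch": 0.05,   # batch processing rate (assumed semantics)
}

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request.

    Assumes cached tokens are a subset of input tokens billed at the cached rate.
    """
    billable_input = input_tokens - cached_tokens
    return (
        billable_input * PRICE_PER_M["input"]
        + cached_tokens * PRICE_PER_M["cached"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# A 200K-token image-heavy prompt with a 2K-token answer:
print(f"${request_cost(200_000, 2_000):.4f}")  # → $0.0431
```

At these rates, even a prompt that nearly fills the 262K context stays in the cents range, which is what makes the model viable for high-throughput vision workloads.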