GLM 4.6

Frontier open model from Z.ai with advanced agentic, reasoning, and coding capabilities
Model details
GLM 4.6 is a state-of-the-art 355-billion-parameter mixture-of-experts (MoE) LLM with strong coding and agentic capabilities. Baseten offers Dedicated Deployments and Model APIs for GLM 4.6, powered by the Baseten Inference Stack.

Example usage

Deployments of GLM 4.6 are OpenAI-compatible.

Input
# You can use this model with any of the OpenAI clients in any language!
# Simply change the API key to get started

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://inference.baseten.co/v1"
)

response = client.chat.completions.create(
    model="zai-org/GLM-4.6",
    messages=[
        {
            "role": "user",
            "content": "Implement Hello World in Python"
        }
    ],
    stream=True,
    stream_options={
        "include_usage": True,
        "continuous_usage_stats": True
    },
    top_p=1,
    max_tokens=1000,
    temperature=1,
    presence_penalty=0,
    frequency_penalty=0
)

for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
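Because the request sets "include_usage" in stream_options, the API also emits a final chunk whose usage field carries token counts for the whole request (its choices list is empty). Below is a minimal sketch of a variant of the loop above that records it, assuming the OpenAI SDK's chunk.usage attribute; Baseten's continuous_usage_stats option may additionally report running totals on intermediate chunks, and the sketch simply keeps the last value seen.

# Variant of the streaming loop that also captures the usage-bearing chunk.
usage = None
for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
    if chunk.usage is not None:  # populated on the final usage chunk
        usage = chunk.usage

if usage is not None:
    print(f"\nprompt_tokens={usage.prompt_tokens}, completion_tokens={usage.completion_tokens}")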
JSON output
{
  "id": "143",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "[Model output here]",
        "role": "assistant",
        "audio": null,
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1741224586,
  "model": "",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 145,
    "prompt_tokens": 38,
    "total_tokens": 183,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  }
}
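The sample above is the shape of a non-streaming chat.completion response. A minimal non-streaming sketch, reusing the client and model id from the example and assuming the field names shown in the sample:

# Non-streaming request: the whole completion comes back as one object
# shaped like the JSON output above.
response = client.chat.completions.create(
    model="zai-org/GLM-4.6",
    messages=[{"role": "user", "content": "Implement Hello World in Python"}],
    max_tokens=1000
)

print(response.choices[0].message.content)  # assistant message text
print(response.usage.prompt_tokens, response.usage.completion_tokens, response.usage.total_tokens)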