docs(ollama): update onboarding flow

Co-Authored-By: Jeffrey Morgan <jmorganca@gmail.com>
(cherry picked from commit e8ca2ff4e522f2d971801a537b3c4fdfecde0711)
Author: Bruce MacDonald
Date: 2026-03-11 14:00:22 -07:00
Committer: Peter Steinberger
Parent: 0068f55dd8
Commit: f906bf58db

5 changed files with 69 additions and 9 deletions


@@ -337,7 +337,7 @@ Options:
- `--non-interactive`
- `--mode <local|remote>`
- `--flow <quickstart|advanced|manual>` (manual is an alias for advanced)
-- `--auth-choice <setup-token|token|chutes|openai-codex|openai-api-key|openrouter-api-key|ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|mistral-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|opencode-go|custom-api-key|skip>`
+- `--auth-choice <setup-token|token|chutes|openai-codex|openai-api-key|openrouter-api-key|ollama|ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|mistral-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|opencode-go|custom-api-key|skip>`
- `--token-provider <id>` (non-interactive; used with `--auth-choice token`)
- `--token <token>` (non-interactive; used with `--auth-choice token`)
- `--token-profile-id <id>` (non-interactive; default: `<provider>:manual`)
@@ -355,8 +355,8 @@ Options:
- `--minimax-api-key <key>`
- `--opencode-zen-api-key <key>`
- `--opencode-go-api-key <key>`
-- `--custom-base-url <url>` (non-interactive; used with `--auth-choice custom-api-key`)
-- `--custom-model-id <id>` (non-interactive; used with `--auth-choice custom-api-key`)
+- `--custom-base-url <url>` (non-interactive; used with `--auth-choice custom-api-key` or `--auth-choice ollama`)
+- `--custom-model-id <id>` (non-interactive; used with `--auth-choice custom-api-key` or `--auth-choice ollama`)
- `--custom-api-key <key>` (non-interactive; optional; used with `--auth-choice custom-api-key`; falls back to `CUSTOM_API_KEY` when omitted)
- `--custom-provider-id <id>` (non-interactive; optional custom provider id)
- `--custom-compatibility <openai|anthropic>` (non-interactive; optional; default `openai`)


@@ -37,7 +37,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugi
- [Mistral](/providers/mistral)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
- [NVIDIA](/providers/nvidia)
-- [Ollama (local models)](/providers/ollama)
+- [Ollama (cloud + local models)](/providers/ollama)
- [OpenAI (API + Codex)](/providers/openai)
- [OpenCode (Zen + Go)](/providers/opencode)
- [OpenRouter](/providers/openrouter)


@@ -1,7 +1,7 @@
---
-summary: "Run OpenClaw with Ollama (local LLM runtime)"
+summary: "Run OpenClaw with Ollama (cloud and local models)"
read_when:
-- You want to run OpenClaw with local models via Ollama
+- You want to run OpenClaw with cloud or local models via Ollama
- You need Ollama setup and configuration guidance
title: "Ollama"
---
@@ -16,6 +16,42 @@ Ollama is a local LLM runtime that makes it easy to run open-source models on yo
## Quick start
### Onboarding wizard (recommended)
The fastest way to set up Ollama is through the onboarding wizard:
```bash
openclaw onboard
```
Select **Ollama** from the provider list. The wizard will:
1. Ask for the base URL where your Ollama instance can be reached (default `http://127.0.0.1:11434`).
2. Let you choose **Cloud + Local** (cloud models and local models) or **Local** (local models only).
3. Open a browser sign-in flow if you choose **Cloud + Local** and are not signed in to ollama.com.
4. Discover available models and suggest defaults.
5. Auto-pull the selected model if it is not available locally.
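Steps 4 and 5 (model discovery and auto-pull) can be sketched in plain shell, assuming the standard `ollama list` and `ollama pull` commands; `model_present` is an illustrative helper, not part of OpenClaw:
```bash
# Sketch of the wizard's auto-pull step (illustrative, not OpenClaw's code).
# model_present reads `ollama list` output on stdin and checks the NAME column.
model_present() {
  awk -v m="$1" 'NR > 1 && $1 == m { found = 1 } END { exit !found }'
}

MODEL="qwen3.5:27b"  # example model id from this guide

# Only talk to Ollama when the CLI is actually installed.
if command -v ollama >/dev/null 2>&1; then
  ollama list | model_present "$MODEL" || ollama pull "$MODEL"
fi
```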
Non-interactive mode is also supported:
```bash
openclaw onboard --non-interactive \
--auth-choice ollama \
--accept-risk
```
Optionally specify a custom base URL or model:
```bash
openclaw onboard --non-interactive \
--auth-choice ollama \
--custom-base-url "http://ollama-host:11434" \
--custom-model-id "qwen3.5:27b" \
--accept-risk
```
### Manual setup
1. Install Ollama: [https://ollama.com/download](https://ollama.com/download)
2. Pull a local model if you want local inference:
@@ -28,7 +64,7 @@ ollama pull gpt-oss:20b
ollama pull llama3.3
```
-3. If you want Ollama Cloud models too, sign in:
+3. If you want cloud models too, sign in:
```bash
ollama signin
@@ -41,7 +77,7 @@ openclaw onboard
```
- `Local`: local models only
-- `Cloud + Local`: local models plus Ollama Cloud models
+- `Cloud + Local`: local models plus cloud models
- Cloud models such as `kimi-k2.5:cloud`, `minimax-m2.5:cloud`, and `glm-5:cloud` do **not** require a local `ollama pull`
OpenClaw currently suggests:
@@ -191,6 +227,14 @@ Once configured, all your Ollama models are available:
}
```
## Cloud models
Cloud models let you run cloud-hosted models (for example `kimi-k2.5:cloud`, `minimax-m2.5:cloud`, `glm-5:cloud`) alongside your local models.
To use cloud models, select **Cloud + Local** mode during onboarding. The wizard checks whether you are signed in and opens a browser sign-in flow when needed. If authentication cannot be verified, the wizard falls back to local model defaults.
You can also sign in directly at [ollama.com/signin](https://ollama.com/signin).
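To verify connectivity outside the wizard, you can query Ollama's standard `/api/tags` HTTP endpoint, which lists locally installed models (a quick diagnostic sketch; it does not report cloud entitlements):
```bash
# Quick connectivity check against an Ollama instance (diagnostic sketch).
# BASE_URL mirrors the wizard's default; override with OLLAMA_BASE_URL.
BASE_URL="${OLLAMA_BASE_URL:-http://127.0.0.1:11434}"

if json=$(curl -fsS "$BASE_URL/api/tags" 2>/dev/null); then
  # Crude name extraction from the JSON response; prefer jq when available.
  printf '%s\n' "$json" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
else
  echo "Ollama is not reachable at $BASE_URL"
fi
```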
## Advanced
### Reasoning models


@@ -134,6 +134,17 @@ openclaw onboard --non-interactive \
```
Swap to `--auth-choice opencode-go --opencode-go-api-key "$OPENCODE_API_KEY"` for the Go catalog.
</Accordion>
<Accordion title="Ollama example">
```bash
openclaw onboard --non-interactive \
--mode local \
--auth-choice ollama \
--custom-model-id "qwen3.5:27b" \
--accept-risk \
--gateway-port 18789 \
--gateway-bind loopback
```
</Accordion>
<Accordion title="Custom provider example">
```bash
openclaw onboard --non-interactive \


@@ -16,7 +16,7 @@ For the short guide, see [Onboarding Wizard (CLI)](/start/wizard).
Local mode (default) walks you through:
-- Model and auth setup (OpenAI Code subscription OAuth, Anthropic API key or setup token, plus MiniMax, GLM, Moonshot, and AI Gateway options)
+- Model and auth setup (OpenAI Code subscription OAuth, Anthropic API key or setup token, plus MiniMax, GLM, Ollama, Moonshot, and AI Gateway options)
- Workspace location and bootstrap files
- Gateway settings (port, bind, auth, tailscale)
- Channels and providers (Telegram, WhatsApp, Discord, Google Chat, Mattermost plugin, Signal)
@@ -178,6 +178,11 @@ What you set:
Prompts for `SYNTHETIC_API_KEY`.
More detail: [Synthetic](/providers/synthetic).
</Accordion>
<Accordion title="Ollama (cloud + local models)">
Prompts for base URL (default `http://127.0.0.1:11434`), then offers Cloud + Local or Local mode.
Discovers available models and suggests defaults.
More detail: [Ollama](/providers/ollama).
</Accordion>
<Accordion title="Moonshot and Kimi Coding">
Moonshot (Kimi K2) and Kimi Coding configs are auto-written.
More detail: [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot).