fix(doctor): resolve false positive for local memory search when no explicit modelPath (#32014)

* fix(doctor): resolve false positive for local memory search when no explicit modelPath

When memorySearch.provider is 'local' (or 'auto') and no explicit
local.modelPath is configured, the runtime auto-resolves to
DEFAULT_LOCAL_MODEL (embeddinggemma-300m via HuggingFace). However,
the doctor's hasLocalEmbeddings() check only inspected the config
value and returned false when modelPath was empty, triggering a
misleading warning.

Fix: fall back to DEFAULT_LOCAL_MODEL in hasLocalEmbeddings(), matching
the runtime behavior in createLocalEmbeddingProvider().

Closes #31998
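The mismatch described above can be shown as a minimal standalone sketch. The function names `hasLocalEmbeddingsBefore` and `resolveRuntimeModelPath` are illustrative, and the `DEFAULT_LOCAL_MODEL` value is a placeholder (the commit only says it resolves to embeddinggemma-300m via HuggingFace):

```typescript
// Placeholder value for illustration; the real constant lives in memory/embeddings.
const DEFAULT_LOCAL_MODEL = "hf:example-org/embeddinggemma-300m";

// Doctor's pre-fix check: only inspects the configured value, so an empty
// modelPath reads as "no local embeddings".
function hasLocalEmbeddingsBefore(local: { modelPath?: string }): boolean {
  return Boolean(local.modelPath?.trim());
}

// Runtime resolution (as in createLocalEmbeddingProvider): an empty path
// auto-resolves to the default model, so embeddings still work.
function resolveRuntimeModelPath(local: { modelPath?: string }): string {
  return local.modelPath?.trim() || DEFAULT_LOCAL_MODEL;
}
```

With `provider: "local"` and no `modelPath`, the runtime resolves a working model while the pre-fix doctor check returns false, which is exactly the false-positive warning.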

* fix: scope DEFAULT_LOCAL_MODEL fallback to explicit provider:local only

Address review feedback: canAutoSelectLocal() in the runtime skips
local for empty/hf: model paths in auto mode. The DEFAULT_LOCAL_MODEL
fallback should only apply when provider is explicitly 'local', not
when provider is 'auto' — otherwise users with no local file and no
API keys would get a clean doctor report but no working embeddings.

Add useDefaultFallback parameter to hasLocalEmbeddings() to
distinguish the two code paths.
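The scoped fallback reduces to the following sketch. The resolution step matches the diff in this commit, simplified to stop at path resolution (the real function goes on to validate the resolved path); the `DEFAULT_LOCAL_MODEL` value is a placeholder:

```typescript
// Placeholder value for illustration.
const DEFAULT_LOCAL_MODEL = "hf:example-org/embeddinggemma-300m";

function hasLocalEmbeddings(local: { modelPath?: string }, useDefaultFallback = false): boolean {
  // Explicit provider "local" passes useDefaultFallback = true, so an empty
  // modelPath still resolves. Auto mode keeps the default (false), matching
  // canAutoSelectLocal(), which skips local for empty/hf: model paths.
  const modelPath =
    local.modelPath?.trim() || (useDefaultFallback ? DEFAULT_LOCAL_MODEL : undefined);
  return Boolean(modelPath);
}
```

So `hasLocalEmbeddings({}, true)` is true (explicit local, default model applies) while `hasLocalEmbeddings({})` is false (auto mode, no fallback), which keeps the doctor warning for users who would end up with no working embeddings.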

* fix: preserve gateway probe warning for local provider with default model

When hasLocalEmbeddings returns true via DEFAULT_LOCAL_MODEL fallback,
also check the gateway memory probe if available. If the probe reports
not-ready (e.g. node-llama-cpp missing or model download failed),
emit a warning instead of silently reporting healthy.

Addresses review feedback about bypassing probe-based validation.
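The gating decision can be sketched as a small predicate. The probe shape here uses only the fields that appear in the diff (`checked`, `ready`, `error`); the interface and function names are illustrative:

```typescript
interface GatewayMemoryProbe {
  checked: boolean;
  ready: boolean;
  error?: string;
}

// True when the doctor should warn even though the model path looks valid:
// the probe actually ran (checked) and reported not-ready.
function shouldWarnFromProbe(probe?: GatewayMemoryProbe): boolean {
  return Boolean(probe?.checked && !probe.ready);
}
```

An absent or unchecked probe stays silent, so environments without a gateway probe keep the clean report while a failed runtime setup (missing node-llama-cpp, failed model download) still surfaces.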

* fix: add changelog attribution for doctor local fallback fix (#32014) (thanks @adhishthite)

---------

Co-authored-by: Adhish <adhishthite@Adhishs-MacBook-Pro.local>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Author: Adhish Thite
Date: 2026-03-03 00:05:40 +05:30 (committed by GitHub)
Commit: 63734df3b0 (parent 534168a7a7)
3 changed files with 95 additions and 6 deletions


@@ -52,6 +52,7 @@ Docs: https://docs.openclaw.ai
 - Telegram: guard duplicate-token checks and gateway startup token normalization when account tokens are missing, preventing `token.trim()` crashes during status/start flows. (#31973) Thanks @ningding97.
 - Skills/sherpa-onnx-tts: run the `sherpa-onnx-tts` bin under ESM (replace CommonJS `require` imports) and add regression coverage to prevent `require is not defined in ES module scope` startup crashes. (#31965) Thanks @bmendonca3.
 - Browser/default profile selection: default `browser.defaultProfile` behavior now prefers `openclaw` (managed standalone CDP) when no explicit default is configured, while still auto-provisioning the `chrome` relay profile for explicit opt-in use. (#32031) Fixes #31907. Thanks @liuxiaopai-ai.
+- Doctor/local memory provider checks: stop false-positive local-provider warnings when `provider=local` and no explicit `modelPath` is set by honoring default local model fallback while still warning when gateway probe reports local embeddings not ready. (#32014) Fixes #31998. Thanks @adhishthite.
 - Sandbox/Docker setup command parsing: accept `agents.*.sandbox.docker.setupCommand` as either a string or a string array, and normalize arrays to newline-delimited shell scripts so multi-step setup commands no longer concatenate without separators. (#31953) Thanks @liuxiaopai-ai.
 - Gateway/Plugin HTTP route precedence: run explicit plugin HTTP routes before the Control UI SPA catch-all so registered plugin webhook/custom paths remain reachable, while unmatched paths still fall through to Control UI handling. (#31885) Thanks @Sid-Qin.
 - Security/Node exec approvals: preserve shell/dispatch-wrapper argv semantics during approval hardening so approved wrapper commands (for example `env sh -c ...`) cannot drift into a different runtime command shape, and add regression coverage for both approval-plan generation and approved runtime execution paths. Thanks @tdjackey for reporting.


@@ -60,6 +60,61 @@ describe("noteMemorySearchHealth", () => {
     resolveMemoryBackendConfig.mockReturnValue({ backend: "builtin", citations: "auto" });
   });

+  it("does not warn when local provider is set with no explicit modelPath (default model fallback)", async () => {
+    resolveMemorySearchConfig.mockReturnValue({
+      provider: "local",
+      local: {},
+      remote: {},
+    });
+    await noteMemorySearchHealth(cfg, {});
+    expect(note).not.toHaveBeenCalled();
+  });
+
+  it("warns when local provider with default model but gateway probe reports not ready", async () => {
+    resolveMemorySearchConfig.mockReturnValue({
+      provider: "local",
+      local: {},
+      remote: {},
+    });
+    await noteMemorySearchHealth(cfg, {
+      gatewayMemoryProbe: { checked: true, ready: false, error: "node-llama-cpp not installed" },
+    });
+    expect(note).toHaveBeenCalledTimes(1);
+    const message = String(note.mock.calls[0]?.[0] ?? "");
+    expect(message).toContain("gateway reports local embeddings are not ready");
+    expect(message).toContain("node-llama-cpp not installed");
+  });
+
+  it("does not warn when local provider with default model and gateway probe is ready", async () => {
+    resolveMemorySearchConfig.mockReturnValue({
+      provider: "local",
+      local: {},
+      remote: {},
+    });
+    await noteMemorySearchHealth(cfg, {
+      gatewayMemoryProbe: { checked: true, ready: true },
+    });
+    expect(note).not.toHaveBeenCalled();
+  });
+
+  it("does not warn when local provider has an explicit hf: modelPath", async () => {
+    resolveMemorySearchConfig.mockReturnValue({
+      provider: "local",
+      local: { modelPath: "hf:some-org/some-model-GGUF/model.gguf" },
+      remote: {},
+    });
+    await noteMemorySearchHealth(cfg, {});
+    expect(note).not.toHaveBeenCalled();
+  });
+
   it("does not warn when QMD backend is active", async () => {
     resolveMemoryBackendConfig.mockReturnValue({
       backend: "qmd",
@@ -164,7 +219,7 @@ describe("noteMemorySearchHealth", () => {
     expect(message).not.toContain("openclaw auth add --provider");
   });

-  it("uses model configure hint in auto mode when no provider credentials are found", async () => {
+  it("warns in auto mode when no local modelPath and no API keys are configured", async () => {
     resolveMemorySearchConfig.mockReturnValue({
       provider: "auto",
       local: {},
@@ -173,10 +228,12 @@ describe("noteMemorySearchHealth", () => {
     await noteMemorySearchHealth(cfg);
+    // In auto mode, canAutoSelectLocal requires an explicit local file path.
+    // DEFAULT_LOCAL_MODEL fallback does NOT apply to auto — only to explicit
+    // provider: "local". So with no local file and no API keys, warn.
     expect(note).toHaveBeenCalledTimes(1);
     const message = String(note.mock.calls[0]?.[0] ?? "");
     expect(message).toContain("openclaw configure --section model");
     expect(message).not.toContain("openclaw auth add --provider");
   });
 });


@@ -5,6 +5,7 @@ import { resolveApiKeyForProvider } from "../agents/model-auth.js";
 import { formatCliCommand } from "../cli/command-format.js";
 import type { OpenClawConfig } from "../config/config.js";
 import { resolveMemoryBackendConfig } from "../memory/backend-config.js";
+import { DEFAULT_LOCAL_MODEL } from "../memory/embeddings.js";
 import { note } from "../terminal/note.js";
 import { resolveUserPath } from "../utils.js";
@@ -42,8 +43,26 @@ export async function noteMemorySearchHealth(
   // If a specific provider is configured (not "auto"), check only that one.
   if (resolved.provider !== "auto") {
     if (resolved.provider === "local") {
-      if (hasLocalEmbeddings(resolved.local)) {
-        return; // local model file exists
+      if (hasLocalEmbeddings(resolved.local, true)) {
+        // Model path looks valid (explicit file, hf: URL, or default model).
+        // If a gateway probe is available and reports not-ready, warn anyway —
+        // the model download or node-llama-cpp setup may have failed at runtime.
+        if (opts?.gatewayMemoryProbe?.checked && !opts.gatewayMemoryProbe.ready) {
+          const detail = opts.gatewayMemoryProbe.error?.trim();
+          note(
+            [
+              'Memory search provider is set to "local" and a model path is configured,',
+              "but the gateway reports local embeddings are not ready.",
+              detail ? `Gateway probe: ${detail}` : null,
+              "",
+              `Verify: ${formatCliCommand("openclaw memory status --deep")}`,
+            ]
+              .filter(Boolean)
+              .join("\n"),
+            "Memory search",
+          );
+        }
+        return;
       }
       note(
         [
@@ -135,8 +154,20 @@ export async function noteMemorySearchHealth(
   );
 }

-function hasLocalEmbeddings(local: { modelPath?: string }): boolean {
-  const modelPath = local.modelPath?.trim();
+/**
+ * Check whether local embeddings are available.
+ *
+ * When `useDefaultFallback` is true (explicit `provider: "local"`), an empty
+ * modelPath is treated as available because the runtime falls back to
+ * DEFAULT_LOCAL_MODEL (an auto-downloaded HuggingFace model).
+ *
+ * When false (provider: "auto"), we only consider local available if the user
+ * explicitly configured a local file path — matching `canAutoSelectLocal()`
+ * in the runtime, which skips local for empty/hf: model paths.
+ */
+function hasLocalEmbeddings(local: { modelPath?: string }, useDefaultFallback = false): boolean {
+  const modelPath =
+    local.modelPath?.trim() || (useDefaultFallback ? DEFAULT_LOCAL_MODEL : undefined);
   if (!modelPath) {
     return false;
   }