2026-04-20-model-check

Source

  • Type: local-file
  • Path: /home/topher/.openclaw/workspace-crash-bot/memory/2026-04-20-model-check.md
  • Bytes: 1225
  • Updated: 2026-05-03T01:02:46.917Z

Content

# 2026-04-20 — Model Check Session
 
## What Happened
- topher noticed the session was running on minimax-m2.7 instead of the configured GLM-5.1:cloud.
 
## Investigation
- Both crash-bot and psb-thinking have identical model config: `glm-5.1:cloud` primary, `minimax-m2.7` fallback
- Earlier curl tests timed out against both cloud endpoints — likely a transient outage
- Later curl test confirmed GLM-5.1:cloud IS alive and responding on `https://ollama.com/v1/chat/completions`
- minimax also works but was the fallback during the outage window
- Confirmed: `api.ollama.com` redirects to `ollama.com` — endpoint shifted
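The curl check above can be reproduced with something like the following. The payload shape assumes the endpoint speaks the OpenAI-compatible chat format, and the `OLLAMA_API_KEY` variable name is an assumption; adjust to the actual auth setup. The live request is gated behind `RUN_LIVE=1` so sourcing the script has no side effects.

```shell
# Smoke test for the cloud endpoint (a sketch, not the exact command used).
ENDPOINT="https://ollama.com/v1/chat/completions"
payload='{"model":"glm-5.1:cloud","messages":[{"role":"user","content":"ping"}]}'

probe() {
  # -m 10 bounds the request at 10 seconds, so a hung endpoint surfaces
  # as curl exit code 28 (timeout) rather than blocking indefinitely.
  curl -sS -m 10 "$ENDPOINT" \
    -H "Authorization: Bearer $OLLAMA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
}

# Only hit the network when explicitly asked to.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  probe
fi
```

A timeout here during the outage window would explain why the session fell back to minimax.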
 
## Key Learnings
- GLM and minimax are both working cloud models
- topher likes to experiment with new models as they drop — GLM is the lab rat, minimax is the safety net
- `/new` spins up a fresh session that reads current config (GLM primary)
- Current session was probably locked to minimax during GLM's transient outage window
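The lock-to-fallback behavior described above can be sketched as a tiny selection function. This is illustrative only: `pick_model` and `probe` are hypothetical names, not the actual OpenClaw session logic.

```python
def pick_model(primary: str, fallback: str, probe) -> str:
    """Return the primary model if its endpoint answers, else the fallback.

    `probe` is any callable that returns True when the model's endpoint
    responds within the timeout. A session makes this choice once at
    startup, which is why it stays pinned until `/new` re-reads config.
    """
    return primary if probe(primary) else fallback


# During the outage window GLM timed out, so a session started then
# landed on the fallback:
model = pick_model("glm-5.1:cloud", "minimax-m2.7", probe=lambda m: False)
# model == "minimax-m2.7"
```

This matches the observed symptom: the config never changed, but a session born during the outage keeps minimax until a fresh session probes GLM again.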
 
## Model Status
- crash-bot: ollama/glm-5.1:cloud (primary) / ollama/minimax-m2.7 (fallback) ✅
- psb-thinking: same config ✅
- Local Ollama: only embedding models (all-minilm, nomic-embed-text) — no local chat models
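For reference, the shared model stanza for both agents might look something like this. The field names are an assumption for illustration; check the actual OpenClaw config schema.

```json
{
  "model": {
    "primary": "ollama/glm-5.1:cloud",
    "fallback": "ollama/minimax-m2.7"
  }
}
```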
 
## Outcome

No config changes made — everything is working as intended.
 
