psb-thinking-2026-04-10

Source

  • Type: local-file
  • Path: /home/topher/.openclaw/workspace-psb-thinking/memory/2026-04-10.md
  • Bytes: 5299
  • Updated: 2026-05-03T01:54:46.994Z

Content

# Memory - 2026-04-10
 
## GPU Research
- K80 (Tesla, 24GB, $60) found on eBay — older Kepler architecture (2014), 300W TDP
- P102-100 (10GB, ~$45) already ordered, ETA Sat Apr 11 - Tue Apr 14
- K80 verdict: overkill for embeddings, would be good for local LLM (7B-13B models), but P102 already in flight
- Pass on K80 — stick with P102-100
 
## Brave Search — Already Configured
- tools.web.search.provider = "brave" was already set in openclaw.json
- BRAVE_API_KEY env var was already active
- I never checked the web search config even though it was already set up; wasted time on Ollama workarounds instead
- Test confirmed: Brave Search working, 699ms response
- Lesson: check existing config before assuming tools aren't set up
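A re-runnable version of that test, so next time the check takes one command instead of a debugging detour (assumes `BRAVE_API_KEY` is exported; endpoint per Brave's web search API):

```shell
# Probe Brave Search and report HTTP status plus round-trip time.
# Assumes BRAVE_API_KEY is exported in the environment.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
  -H "Accept: application/json" \
  -H "X-Subscription-Token: ${BRAVE_API_KEY:-unset}" \
  "https://api.search.brave.com/res/v1/web/search?q=probe&count=1" || true)
echo "brave search status: ${status:-no-response}"
```

A `200` here means key and provider are both live; `401`/`403` means the key is the problem, not the config.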
 
## 2890-Claw — Same Memento Problem
- Both psb-thinking AND 2890-claw lost identity/memory after migration Pi→media
- 2890-claw fixed it by SSHing into Pi and pulling workspace files
- His 11.9MB 2890-bot.sqlite is on the Pi, not media — media has no vector DB
- Both agents run same config (ollama/nomic-embed-text + remote to media Ollama)
- Both use minimax-m2.7 cloud — 2890-claw's responses are "lightning fast" like ours
- Root cause: migration reset workspace to blank templates, not a crash
 
## 2890-Claw's Backup Setup
- backup-to-pi.sh: rsyncs identity files (MEMORY.md, IDENTITY.md, SOUL.md, USER.md) to Pi
- sync-from-pi.sh: full reverse sync from Pi to media
- Safety: checks MEMORY.md is ≥500 bytes before pushing (avoids overwriting good backup with empty file)
- Cron: hourly on 2890-claw's side
- I replicated both scripts for psb-thinking at /home/topher/.openclaw/scripts/
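The size guard is the part worth preserving; a minimal sketch of the shape (paths, host, and variable names are placeholders, not the actual script):

```shell
#!/usr/bin/env bash
# Sketch of a backup-to-pi.sh-style push with the >=500-byte guard.
# WORKSPACE and PI_DEST are assumed placeholders.
WORKSPACE="${WORKSPACE:-$HOME/.openclaw/workspace-psb-thinking}"
PI_DEST="${PI_DEST:-pi:backups/psb-thinking/}"
MIN_BYTES=500

# Guard: never overwrite a good backup with an empty/truncated MEMORY.md.
big_enough() {
  local size
  size=$(stat -c%s "$1" 2>/dev/null || echo 0)
  [ "$size" -ge "$MIN_BYTES" ]
}

if big_enough "$WORKSPACE/MEMORY.md"; then
  rsync -a "$WORKSPACE"/MEMORY.md "$WORKSPACE"/IDENTITY.md \
    "$WORKSPACE"/SOUL.md "$WORKSPACE"/USER.md "$PI_DEST"
else
  echo "MEMORY.md below ${MIN_BYTES} bytes; refusing to push" >&2
fi
```

Checking the source file's size (rather than the destination's) is the right direction: a blank post-migration workspace then fails the guard and leaves the Pi copy alone.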
 
## Edge AI Discussion
- Topher teaches robotics (Arduino for class, roboRio/Photonvision for FRC club)
- Has Coral TPUs running Frigate on 2 Home Assistant instances
- Interested in AI edge systems but they don't fit current OpenClaw architecture
- 2890-claw handles robotics team dashboard (scouting, match analysis, strategy)
 
## Self-Improving Skill
- Topher asked if skill might be misdirecting my paths
- The skill describes correct patterns (WAL protocol, corrections.md, mistakes.md)
- But: memory search broken → can't retrieve what skill writes → can't verify it's working
- sessions_list confirmed working (just tested successfully)
- Skill isn't the problem — embedding pipeline is
 
## Infrastructure Status
- P102-100 GPU: in transit, ETA Sat-Tue
- Ollama on media: CPU-only, embeddings slow (15-21s per chunk)
- Memory search: disabled to stop timeout loop
- DuckDNS cron: changed from 5min to 2hr (IP stable)
- Pi cron jobs: all disabled (waiting for media migration review)
- Brave Search: active and working
- 2890-claw: restored, running on media, backs up to Pi hourly
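The 15-21s/chunk number above can be re-measured with one curl against Ollama's embeddings endpoint whenever the GPU lands (host/port is an assumption):

```shell
# Time one embedding round-trip against Ollama on media (host/port assumed).
OLLAMA="${OLLAMA:-http://media:11434}"
t=$(curl -s -o /dev/null -w '%{time_total}' --max-time 30 \
  -d '{"model":"nomic-embed-text","prompt":"timing probe"}' \
  "$OLLAMA/api/embeddings" || true)
echo "one-chunk embedding took ${t:-unknown}s"
```

Run before/after the P102-100 install to get a clean CPU-vs-GPU comparison.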
 
 
## Root Cause: node-llama-cpp on Node 24
 
**Pi's memory worked because:**
- node-llama-cpp installed and loaded successfully (different Node version or architecture)
- Local embedding provider = 600s batch timeout
- Slow embeddings still completed within 10 min
 
**Media breaks because:**
- node-llama-cpp installed but ESM module fails to load (`ERR_REQUIRE_ASYNC_MODULE`) on Node 24
- Local provider unavailable → forced to Ollama provider (120s timeout)
- CPU embeddings take 15-21s/chunk × 10+ chunks ≈ 150-210s, well past the 120s limit → SIGKILL
- This is why "same config" = different results
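Both halves of the diagnosis are checkable from a shell; `node -e` exercises the CJS loader the same way a `require()` inside OpenClaw would (module name is from the notes, everything else is arithmetic):

```shell
# 1) Does node-llama-cpp load under this Node? ERR_REQUIRE_ASYNC_MODULE means
#    an ESM-only module was pulled in via require() (the Node 24 failure mode).
if node -e "require('node-llama-cpp')" 2>/dev/null; then
  echo "node-llama-cpp loads via require()"
else
  echo "require() failed (ERR_REQUIRE_ASYNC_MODULE expected, or node/module missing here)"
fi

# 2) Timeout budget: how many chunks fit in the Ollama provider's 120s window?
per_chunk=15   # observed lower bound, seconds
timeout=120
max_chunks=$(( timeout / per_chunk ))
echo "at ${per_chunk}s/chunk, only ${max_chunks} chunks fit in ${timeout}s; 10+ chunks will be killed"
```

Even at the best observed speed the budget caps out at 8 chunks, so any file that indexes into 10+ chunks is guaranteed to die on media's CPU.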
 
**psb-hacker-claude (default workspace):** 17 chunks indexed, 9/9 files, NOT dirty — embeddings work there. Same machine, different workspace. Something workspace-specific about psb-thinking's index.
 
**Topher says there's a small GPU in media.** Investigate this — might not need to wait for P102-100.
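A quick first pass for that investigation (standard `lspci`/`nvidia-smi` usage; note headless mining cards like the P102-100 enumerate as "3D controller", not "VGA"):

```shell
# List display-class PCI devices on this box.
lspci 2>/dev/null | grep -iE 'vga|3d|display' \
  || echo "no display-class PCI devices found (or lspci missing)"

# If an NVIDIA driver is installed, ask it directly.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not installed"
fi
```

If `lspci` shows a card but `nvidia-smi` is missing, the GPU exists and only the driver/CUDA stack needs setting up.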
 
## Mistake Logged (2026-04-10 01:39 UTC)
- Dismissed Topher's concern about node-llama-cpp when he raised it
- Should have traced the Pi→media difference immediately instead of going down GPU research rabbit hole
- Realized at 01:39 UTC: the Pi had node-llama-cpp working; media doesn't
 
## Pending
- [ ] Investigate "small GPU in media" — Topher mentioned this at 01:55 UTC
- [ ] Try reinstalling OpenClaw to rebuild node-llama-cpp native module
- [ ] psb-hacker-claude works with embeddings — why not psb-thinking? Same Ollama provider...
 
## Pi SESSION-STATE.md Pulled (02:50 UTC)
- Pre-migration session state from Pi (Apr 8 21:36 UTC)
- Contains: SSD health (NVMe 500GB WD, 100% remaining, 0% wear), dreaming investigation notes, multi-agent setup
- Not merged into current SESSION-STATE.md (kept separate per Topher's guidance — pre-OpenClaw info)
- Key: dreaming investigation on Pi showed memory-core wasn't writing DREAMS.md files — same issue on media
 
## Dreaming Failure Pattern (Apr 8)
- memory-core internal sweep fires at 3 AM but fails to write DREAMS.md to workspace
- Aggregate script at 4 AM finds nothing (0 agents dreaming)
- Possible cause: filesystem permission or path issue in memory/.dreams/
- When dreaming fixed tonight, check if DREAMS.md actually gets written
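To separate "sweep didn't run" from "sweep ran but couldn't write", a small permissions probe against the expected paths (workspace path is an assumption):

```shell
# Check whether DREAMS.md could even be written where the 3 AM sweep expects it.
WS="${WS:-$HOME/.openclaw/workspace-psb-thinking}"
for target in "$WS" "$WS/memory/.dreams"; do
  if [ -d "$target" ] && [ -w "$target" ]; then
    echo "writable: $target"
  else
    echo "NOT writable (or missing): $target"
  fi
done
```

Run it as the same user the memory-core sweep runs as; a root-owned `memory/.dreams/` from the migration would produce exactly this silent-failure pattern.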
 
## Heartbeat Note (03:25 UTC)
- Dreaming fired at 02:54 UTC — wrote to memory/.dreams/events.jsonl and short-term-recall.json
- DREAMS.md at workspace root still NOT created (consistent with past pattern)
- memory/.dreams/ now exists — dreaming is partially working
- A different session was active today; it wrote the 91-line memory/2026-04-10.md entry
 
