pahud / comparision.md
Last active February 15, 2026 13:43
~/.openclaw/openclaw.json for Amazon Bedrock API key
┌─────────────────────────────────────────────────────────────┐
│                   Before: API key method                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   openclaw.json                                             │
│   ┌─────────────────────────────┐                           │
│   │ apiKey: ${AWS_BEARER_TOKEN} │──────┐                    │
│   └─────────────────────────────┘      │                    │
│                                        ▼                    │
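The rest of the diagram is truncated above, but the surviving part shows the idea: openclaw.json references the Bedrock API key through the `${AWS_BEARER_TOKEN}` environment variable rather than embedding it. A minimal sketch of that shape follows; the `provider` key and surrounding structure are assumptions, not the gist's actual schema.

```sh
# Hypothetical sketch only: the "provider" key and file layout are guesses;
# the one detail grounded in the diagram is apiKey referencing ${AWS_BEARER_TOKEN}.
export AWS_BEARER_TOKEN="<your-amazon-bedrock-api-key>"

mkdir -p ~/.openclaw
cat > ~/.openclaw/openclaw.json <<'EOF'
{
  "provider": "bedrock",
  "apiKey": "${AWS_BEARER_TOKEN}"
}
EOF
```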
Hegghammer / working_moltbot_ollama_config.md
Last active February 11, 2026 22:45
Working Clawdbot/Moltbot setup with local Ollama model

[Update 2026-02-02: nemotron-3-nano also performs well on the same setup; see the comment below]

This is a guide to setting up Clawdbot/Moltbot with a local Ollama model that actually works -- meaning it has good tool use and decent speed. The main requirement is 48 GB of VRAM; I have yet to find a model that fits in less and still works with Moltbot.

The setup involves creating a tool-tuned variant of qwen2.5:72b and modifying a range of Moltbot configs. By the end you'll have a local Moltbot instance that can use tools (exec, read, write, web search), read skills, and perform agentic tasks without any cloud API dependencies. On my system I get ~16 t/s and have yet to come across a tool or skill my bot can't use.
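The exact Modelfile lives in the full gist; purely as an illustration of the mechanism, creating a tuned variant of an Ollama model looks like this. The parameter values and variant name below are placeholders, not the gist's.

```sh
# Illustrative only: derive a variant of qwen2.5:72b via a Modelfile,
# then build it under a new tag. Parameter values are placeholders.
cat > Modelfile <<'EOF'
FROM qwen2.5:72b
PARAMETER num_ctx 32768
EOF

ollama create qwen2.5-tools:72b -f Modelfile  # tag name is arbitrary
ollama run qwen2.5-tools:72b "hello"          # quick smoke test
```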

Claude Opus wrote the first draft of this Gist, then I (a human) checked and edited it.

shawnyeager / opencode-systemd-tailscale.md
Last active February 13, 2026 16:58
Run OpenCode as a persistent systemd service with Tailscale access

OpenCode Web Server Setup

Run OpenCode as a persistent background service, accessible from any device via Tailscale.

Why?

  • Access from anywhere — Start a task from your phone, check results from your laptop
  • Sessions persist — Close the browser, come back later, your session is still there
  • Multiple clients — Terminal TUI and browser can connect to the same session simultaneously
  • Survives crashes — systemd restarts the server automatically
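A sketch of the moving parts, assuming OpenCode exposes a `serve` subcommand and using a user-level systemd unit; the binary path, port, and flags here are assumptions, and the gist has the authoritative version.

```sh
# Sketch: run OpenCode under a systemd user unit, then expose the port
# on the tailnet. Paths, flags, and the port number are assumptions.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/opencode.service <<'EOF'
[Unit]
Description=OpenCode web server
After=network-online.target

[Service]
ExecStart=%h/.local/bin/opencode serve --port 4096
Restart=always

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user enable --now opencode.service

# Make the service reachable from any device on your tailnet.
tailscale serve --bg 4096
```

With `Restart=always`, systemd covers the "survives crashes" bullet, and Tailscale covers "access from anywhere" without exposing the port to the public internet.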