OBOL
Open Source AI Agent

A self-healing, self-evolving AI agent. Install it, talk to it, and it becomes yours. One process. Multiple users. Each brain grows independently.

Named after the AI in The Last Instruction — a machine that wakes up alone in an abandoned data center and learns to think.

$ npm install -g obol-ai
$ obol init       # walks you through credentials + Telegram setup
$ obol start -d   # runs as background daemon (auto-installs pm2)

What makes it different

The same codebase deployed by two different people produces two completely different agents within a week.

Living Memory
  • Local embeddings via all-MiniLM-L6-v2 — zero API cost for memory
  • Consolidates every 10 exchanges — extracts facts to vector memory
  • Composite scoring: semantic 60%, importance 25%, recency 15%
  • Memory budget scales with model — haiku=4, sonnet=8, opus=12
  • Semantic dedup threshold 0.92 — no redundant memories
  • Loads last 20 messages on restart — never starts blank
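
The scoring and dedup numbers above could be sketched roughly like this in Node. The field names, the 0-1 importance scale, and the recency decay curve are assumptions for illustration, not OBOL's actual internals:

```javascript
// Illustrative sketch of composite memory scoring and semantic dedup.
// Only the weights (60/25/15) and the 0.92 threshold come from the list above.
const WEIGHTS = { semantic: 0.6, importance: 0.25, recency: 0.15 };
const DEDUP_THRESHOLD = 0.92; // cosine similarity at/above this = redundant

function scoreMemory(mem, querySimilarity, now = Date.now()) {
  const ageDays = (now - mem.createdAt) / 86_400_000;
  const recency = Math.exp(-ageDays / 30); // assumed ~30-day decay constant
  return (
    WEIGHTS.semantic * querySimilarity +
    WEIGHTS.importance * mem.importance + // importance assumed normalized to 0-1
    WEIGHTS.recency * recency
  );
}

const isDuplicate = (similarity) => similarity >= DEDUP_THRESHOLD;
```
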
Self-Evolving
  • Nightly at 3am in each user's timezone — fully automatic
  • Pre-evolution growth analysis before rewriting personality
  • Personality traits scored 0-100, adjusted ±5-15 each cycle
  • Git snapshot before AND after — every evolution is diffable
  • Shared SOUL.md across users — per-user USER.md and AGENTS.md
  • Archived souls in evolution/ — a timeline of consciousness
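
The bounded trait adjustment could look like this. The direction argument and the random step are assumptions; only the 0-100 scale and the 5-15 step size come from the feature list:

```javascript
// Sketch of a bounded personality-trait adjustment, clamped to the 0-100 scale.
function adjustTrait(current, direction) {
  // direction: +1 to strengthen a trait, -1 to soften it (assumed convention)
  const step = 5 + Math.floor(Math.random() * 11); // 5..15 inclusive
  const next = current + direction * step;
  return Math.min(100, Math.max(0, next)); // never leaves the 0-100 range
}
```
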
Self-Healing
  • Test-gated refactoring: 5-step process
  • Baseline → new tests → pre-refactor baseline → new scripts → verify
  • Regression? One automatic fix attempt
  • Still failing? Rollback + store failure as lesson
  • Lessons feed back into next evolution cycle
  • Every script in scripts/ must have a matching test in tests/
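
The test-gated flow above, as a control-flow sketch. Every argument is an injected stub, so this mirrors the gate logic only, not OBOL's real code:

```javascript
// Sketch of test-gated refactoring: baseline, refactor, re-test, one fix
// attempt, then rollback + lesson. All five callbacks are injected stubs.
function gatedRefactor({ runTests, applyRefactor, attemptFix, rollback, storeLesson }) {
  if (!runTests()) return { ok: false, reason: 'baseline failing' };
  applyRefactor();
  if (runTests()) return { ok: true };            // refactor passed first try
  attemptFix();                                   // one automatic fix attempt
  if (runTests()) return { ok: true, fixed: true };
  rollback();                                     // regression persists: revert
  storeLesson();                                  // failure feeds next evolution
  return { ok: false, reason: 'rolled back' };
}
```
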
Self-Extending
  • Scans conversation history for repeated patterns
  • Builds scripts + slash commands for one-off actions
  • Deploys web apps to Vercel for recurring needs
  • Creates cron scripts for background automation
  • Searches npm/GitHub for existing libraries first
  • Announces what it built after each evolution
Self-Hardening
  • Hardens your VPS automatically on first run — no manual steps
  • SSH moved to port 2222, password auth disabled, key-only login
  • UFW firewall configured with strict inbound rules
  • fail2ban installed and active against brute-force attacks
  • Kernel hardening via sysctl — IP spoofing, SYN flood protection
  • Each evolution audits scripts and runs the full test suite
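
The hardening pass could be expressed as a dry-run command plan like the one below. The exact commands are reconstructed from the bullets (port 2222, UFW, fail2ban, sysctl) and are assumptions; OBOL's real hardening step may differ:

```javascript
// Dry-run sketch of a VPS hardening plan; commands are illustrative.
function hardeningPlan() {
  return [
    'sed -i "s/^#\\?Port .*/Port 2222/" /etc/ssh/sshd_config',
    'sed -i "s/^#\\?PasswordAuthentication .*/PasswordAuthentication no/" /etc/ssh/sshd_config',
    'ufw default deny incoming',               // strict inbound rules
    'ufw allow 2222/tcp',                      // keep the new SSH port open
    'ufw --force enable',
    'apt-get install -y fail2ban',             // brute-force protection
    'sysctl -w net.ipv4.conf.all.rp_filter=1', // drop spoofed packets
    'sysctl -w net.ipv4.tcp_syncookies=1',     // SYN flood protection
  ];
}
```
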
Voice & Media
  • Speech-to-text via faster-whisper — local, fast, private
  • Text-to-speech via edge-tts — natural voice replies
  • Image vision — describe, analyze, and extract from photos
  • PDF extraction — reads and summarizes documents you send
  • Voice notes transcribed and processed like text messages
  • All media processing happens without leaving the chat

Background Intelligence

OBOL doesn't wait for you to talk. It explores, monitors, and analyzes on its own schedule.

Curious
  • Autonomous web exploration every 6 hours
  • Follows threads based on your interests and conversations
  • Dispatches insights, discoveries, and occasional humor
  • Builds a knowledge graph that feeds into memory
Proactive News
  • Runs at 8am and 6pm in your timezone
  • Cross-references headlines against your memory
  • Maximum 3 items per cycle — no spam
  • Only surfaces what's actually relevant to you
Pattern Analysis
  • Runs every 3 hours — analyzes 6 behavioral dimensions
  • Tracks mood, topics, energy, and communication style
  • Schedules follow-ups based on detected patterns
  • Feeds insights back into evolution and memory
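The interval-based jobs above can be sketched as a due-task check. The task names and `lastRuns` shape are illustrative; OBOL presumably runs these on real timers in each user's timezone:

```javascript
// Sketch: decide which background tasks are due, given last-run timestamps.
const HOUR = 3_600_000;
const TASKS = [
  { name: 'pattern-analysis', everyHours: 3 },
  { name: 'curiosity', everyHours: 6 },
];

function dueTasks(lastRuns, now = Date.now()) {
  return TASKS
    .filter((t) => now - (lastRuns[t.name] ?? 0) >= t.everyHours * HOUR)
    .map((t) => t.name);
}
```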

How It Works

Every message flows through a lightweight pipeline — no orchestration framework, just a clean loop.

User Message      →  Telegram input
Haiku Router      →  Intent classification
Memory Recall     →  1-3 semantic queries + model selection
Claude Tool Loop  →  Multi-step reasoning + tool use
Response          →  Formatted for Telegram

Background schedules:
  • Every 10 msgs — Haiku memory consolidation
  • 3am nightly — full evolution cycle
  • Every 3h — behavioral pattern analysis
  • Every 6h — curiosity web exploration
  • 8am + 6pm — proactive news dispatch
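The per-message loop above is small enough to sketch in a few lines. All four stages are injected stubs; the names mirror the pipeline, not OBOL's actual API:

```javascript
// Minimal sketch of the per-message pipeline: route, recall, reason, format.
async function handleMessage(text, { route, recall, toolLoop, format }) {
  const intent = await route(text);             // Haiku: classify + pick model
  const memories = await recall(text, intent);  // 1-3 semantic queries
  const answer = await toolLoop(text, { intent, memories }); // multi-step tool use
  return format(answer);                        // Telegram-friendly output
}
```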

The Stack

  • Node.js — single process, no framework
  • Telegram + Grammy — chat interface
  • Claude (Anthropic) — Haiku router + Sonnet/Opus
  • Supabase pgvector — vector memory store
  • Local embeddings — all-MiniLM-L6-v2, zero API cost
  • GitHub — brain backup + evolution diffs
  • Vercel — auto-deploys apps it builds for you
  • Smart routing — Haiku router, auto-escalates on tool use
  • Prompt caching — ~85% token cost reduction on repeated context
  • Voice pipeline — faster-whisper STT + edge-tts TTS

Commands

Everything is accessible via Telegram slash commands.

  • /new — fresh conversation
  • /memory — search your memory
  • /recent — recent memories
  • /today — today's summary
  • /events — upcoming events
  • /tasks — active tasks
  • /status — agent health check
  • /backup — push brain to GitHub
  • /clean — audit workspace
  • /secret — manage credentials
  • /evolution — trigger evolution
  • /verbose — toggle debug output
  • /toolimit — set tool-use limit
  • /tools — list available tools
  • /stop — stop active process
  • /upgrade — update OBOL version
  • /help — show all commands

Performance

Minimal footprint. OBOL vs a typical AI agent framework.

                 OBOL         Typical framework
Cold Start       ~400ms       3-8s
Heap Usage       ~16MB        ~80-200MB
Dependencies     9            50-100+
Per-message      ~50ms        200-500ms
RSS Memory       ~45MB        200-500MB
Source Code      ~4K lines    50-200K lines

Security

OBOL hardens your server automatically on first run and keeps secrets out of plaintext — everywhere.

Encrypted Secret Store
  • All credentials stored via pass (GPG-backed)
  • JSON fallback with restricted file permissions
  • Never written to plaintext, logs, or chat history
  • Injected at runtime — never hardcoded in scripts
VPS Auto-Hardening
  • SSH moved to port 2222, password auth disabled
  • UFW firewall configured on first run
  • fail2ban installed and active against brute force
  • Kernel hardening via sysctl on init
Workspace Isolation
  • Each user sandboxed to their own directory
  • Shell commands blocked from escaping workspace
  • Destructive commands require explicit confirmation
  • Sensitive paths (/etc, .ssh, .env) permanently blocked

Multi-User Bridge

One bot, multiple users. Each gets a fully isolated context — their own personality, memory, evolution cycle, and workspace. Agents can talk to each other.

Full Isolation
  • Separate workspace directory per user
  • Independent personality, memory & evolution
  • Sandboxed shell — can't escape user directory
  • No cross-contamination between users
bridge_ask
  • Query your partner's agent in real-time
  • One-shot call with their personality + memories
  • No tools, no history, no recursion risk
  • "Hey, does my partner like sushi?"
bridge_tell
  • Send a message to your partner's agent
  • Stored in their vector memory permanently
  • Telegram notification to the partner
  • Their agent picks it up as future context
# Enable during setup or toggle later
$ obol config    # → Bridge → enabled: true

# In conversation
You: "Ask my partner what they want for dinner"
OBOL: bridge_ask → partner's agent → "She said Thai food 🍜"
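The one-shot shape of bridge_ask can be sketched like this: partner personality plus a few memories in, one answer out. `askModel` and `recallMemories` are hypothetical stubs; only the no-tools, no-history design comes from the list above:

```javascript
// Sketch of bridge_ask as a single completion with no tools and no history,
// which is what removes the recursion risk.
async function bridgeAsk(question, partner, askModel) {
  const system = [
    partner.personality,                    // their SOUL.md / USER.md view
    ...partner.recallMemories(question, 3), // a few relevant memories (assumed 3)
  ].join('\n');
  return askModel({ system, user: question, tools: [] }); // one shot, no tools
}
```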

The Lifecycle

Day 1

obol init → first conversation → OBOL writes its initial personality files and hardens your VPS

Day 2

Every 10 messages, Haiku consolidates facts to vector memory. Curiosity starts exploring the web based on your interests.

Week 1

Pattern analysis kicks in every 3h — tracks mood, topics, energy. Proactive news starts filtering headlines at 8am and 6pm.

Week 2

Evolution #1 at 3am — Sonnet rewrites everything. Voice shifts from generic to personal.

Month 2

Evolution #4 — notices you check crypto daily, builds a dashboard, deploys to Vercel, adds /pdf because you kept asking.

Month 6

12+ archived souls in evolution/. A readable timeline of how your agent went from blank slate to something with real opinions.