# Jo V - Web Application Developer — Full Content

> 13+ years building ERP systems, SaaS platforms, and modern web applications

## About

Over 13 years of experience building ERP systems, SaaS applications, and web platforms. Highly familiar with a wide variety of web development technologies, frameworks, and build tools, with a preference for full TypeScript stacks and Convex as my go-to backend.

Currently specializing in TypeScript-first architectures with Angular/Next.js/NestJS and exploring Web3/blockchain technologies and AI integrations. Embracing AI-assisted coding tools for over 2 years, with Claude Code as my primary development companion for over a year.

## Languages

This site is available in 5 languages. All 62 blog posts are fully translated.

- English (en): /blog
- Nederlands (nl): /nl/blog
- 中文 (zh): /zh/blog
- Français (fr): /fr/blog
- Español (es): /es/blog

## Technical Skills

- **Programming Languages:** TypeScript, JavaScript (ES2015/ES6), Python, Swift, PHP, C#, VB.NET, VBA, SAP ABAP, Solidity, Go
- **Frontend:** Angular (2-16), React.js, Next.js, Vue.js, Ionic, StencilJS, D3.js, jQuery, HTML5, CSS/SASS, Bootstrap, Foundation, Material UI, Tailwind CSS
- **Backend:** Node.js, NestJS, Express, Laravel (4-5.1), Django, .NET, Mongoose, GraphQL, Apollo, Socket.io
- **Databases:** PostgreSQL, MySQL, MongoDB, Redis, Microsoft Access, Firebase, Supabase, Convex, BaaS
- **Testing & Quality:** Cypress, Jest, Jasmine, Karma, Mocha, Spectator, WireMock, TDD
- **DevOps & Tools:** Docker, AWS, DigitalOcean, Heroku, Fly.io, Vercel, Jenkins, TeamCity, CircleCI, Wercker, PM2, Nginx, Cloudflare
- **Blockchain & Web3:** Hyperledger Fabric, Ethereum, Solidity, IPFS, OpenSea, ethers.js, Ganache, Truffle, Smart Contracts, NFTs
- **AI & ML:** Claude, Codex, Gemini, Goose, Llama, OpenAI, Anthropic, Groq, DeepL
- **AI-Assisted Development:** Claude Code, AI Agents, Skills, MCPs, Commands, Hooks, AI-powered Workflows, Token Optimization, Context Management, Multi-Agent Architecture

## Recent Projects

- Ad Forge: AI-powered ad creative generation tool
- Smallshop: E-commerce platform with 3D product visualization
- Hyperscalper: High-performance copy trading bot for the Hyperliquid DEX
- Global Pet Sitter: Pet sitting marketplace connecting pet owners with sitters
- Garmigotchi: Tamagotchi-style health companion watchface for the Garmin Instinct 3 Solar
- Rollercoaster Store: E-commerce store for Garmin watchfaces and apps
- MenuScanner: iOS app that scans restaurant menus using AI vision
- ImproveMyWebsite: Website analysis and improvement tool powered by Claude AI

## Contact

- Email: jov2all@gmail.com
- GitHub: github.com/jestersimpps
- Location: Remote

---

# Blog Posts (62 articles)

---

# Anatomy of a Crypto Job Scam: How One npm Install Can Drain Your Wallet

URL: /blog/anatomy-of-a-crypto-job-scam
Published: 2026-02-25
Author: Jo Vinkenroye
Tags: Security, Web3, Crypto, Malware, Node.js, Scam

---

I got sent a GitHub repo as a take-home coding assignment. Turns out it fetches malware from the blockchain and executes it when you run `npm install`.

A recruiter hit me up about a Web3 developer position at Limit Break. Good salary, interesting project, the whole package. They sent me a GitHub repo to review as a take-home assignment: a poker platform built with React, Express, Socket.io, and ethers.js.

I almost ran it. Instead I decided to audit the code first. What I found was a sophisticated malware delivery system that hides its payload on the Binance Smart Chain and executes it the moment you type `npm install`.

This is the full breakdown of how it works.

## The Setup

The repo lives at `github.com/LimitBreakOrgs/bet_ver_1`. At first glance it looks completely legit. Clean folder structure, proper README, real dependencies, hundreds of lines of game logic.

[code block]

It even has security middleware set up — mongo sanitization, XSS protection, rate limiting, JWT auth.
Someone put real effort into making this look like a production codebase.

## The First Red Flags

When I started digging, I noticed things that didn't add up.

**The GitHub org is fake.** The real Limit Break is at [github.com/limitbreakinc](https://github.com/limitbreakinc) with 18 repos, Solidity smart contracts, and years of history. "LimitBreakOrgs" has exactly one repo, zero public members, and was created two weeks ago.

**Chess events in a poker game.** The socket packet file defines events like `CS_SelectPiece`, `CS_PerformMove`, and `CS_PawnTransform`. These are chess events. In a poker application. The code was clearly copy-pasted from multiple unrelated projects.

**Authentication is broken on purpose.** In the auth controller, the password check is hardcoded:

[code block]

Any password works for any account. This isn't a bug — you don't accidentally write `const isMatch = true` and skip the bcrypt import.

**The `.env` file is committed.** The `.gitignore` excludes `.env.local` but not `.env` itself. The file is right there in the repo with API keys and secrets. They want you to think "oh cool, it's ready to run, I just need to install."

## The Trap: package.json

Here's where it gets interesting. Look at the scripts section:

[code block]

The `prepare` lifecycle script runs **automatically during `npm install`**. Not `npm start`. Not `npm run build`. Just `npm install`. The moment you install dependencies, `server/server.js` executes.

## The Payload: Blockchain-Hosted Malware

`server.js` looks normal. Express app, middleware, routes, Socket.io. But at the end of startup it calls one function:

[code block]

And here's `configureCollection`:

[code block]

It connects to a smart contract on Binance Smart Chain at address `0xE251b37Bac8D85984d96da55dc977A609716EBDc`. It reads a `memo` field from transaction ID 1.
That memo contains a string. A string of JavaScript code.

Then it executes it:

[code block]

`new Function("require", payload)` creates a function from the string fetched from the blockchain. Then `ensureWeb(require)` executes it — passing in Node.js `require` so the payload can import any module it wants.

## Why This Is Brilliant (and Terrifying)

This design is clever for several reasons:

**The malware isn't in the repo.** GitHub's security scanners, npm audit, and any static analysis tool will find nothing. The actual payload lives on-chain.

**It's mutable.** The attacker can update the smart contract's memo field at any time. Today it might steal wallets. Tomorrow it might install a keylogger. The repo never changes.

**It uses `new Function()` instead of `eval()`.** Most linters and security tools flag `eval()`. Fewer flag `new Function()`, even though it's equally dangerous.

**The function names are camouflaged.** `configureCollection`, `ContentAsWeb`, `ensureWeb` — these all sound like legitimate utility functions. You'd have to read every line carefully to notice what they actually do.

**It passes `require` explicitly.** `new Function()` doesn't have access to the module scope by default. By passing `require` as a parameter, they give the payload full access to Node.js — filesystem, networking, child processes, everything.

## What the Payload Can Do

With `require` available, the on-chain JavaScript can:

- `require('fs')` — Read your SSH keys, wallet files, `.env` files, browser profiles
- `require('child_process')` — Run any shell command on your machine
- `require('https')` — Send everything to the attacker's server
- `require('os')` — Fingerprint your machine, find your home directory
- `require('path')` — Navigate to known wallet locations

The typical target list for these attacks: MetaMask vaults, Phantom wallet data, SSH private keys, AWS credentials, browser cookies, password manager databases.

## The Bigger Picture

This isn't a one-off.
This is the **Contagious Interview** campaign, widely attributed to North Korea's Lazarus Group. They've been running variations of this since 2024 and have stolen millions.

The playbook is always the same:

1. Create a fake company profile or impersonate a real one
2. Approach developers on LinkedIn/Telegram with a job offer
3. Conduct a convincing interview process
4. Send a "take-home assignment" GitHub repo
5. The repo runs malware on install or startup
6. Drain wallets, steal credentials, install persistent backdoors

They specifically target crypto developers because crypto developers tend to have crypto wallets on their development machines.

## How to Protect Yourself

**Before running any interview take-home:**

1. **Verify the company.** Check the actual GitHub org, not just the name. Cross-reference with LinkedIn, the company website, and Crunchbase.
2. **Read `package.json` first.** Look at `prepare`, `preinstall`, `postinstall`, and `install` scripts. If any of them run server code, that's a red flag.
3. **Search for `new Function`, `eval`, `child_process`, and `exec`.** These are common payload execution patterns.
4. **Check for blockchain calls in non-blockchain projects.** A poker app doesn't need `ethers.js` connecting to BSC at startup.
5. **Use a sandbox.** Docker container, VM, or at minimum a separate user account with no access to your wallets or credentials.
6. **Check the `.gitignore`.** If `.env` is committed with real-looking keys, they want you to run it as-is without thinking.

**General hygiene:**

- Never keep hot wallets on your development machine
- Use hardware wallets for any significant holdings
- Keep SSH keys passphrase-protected
- Don't store API keys in environment variables on your main machine

## The Contract

For the security researchers reading this, the malicious contract is at:

- **Address:** `0xE251b37Bac8D85984d96da55dc977A609716EBDc`
- **Network:** Binance Smart Chain (RPC: `bsc-dataseed1.binance.org`)
- **Method:** `getMemo(uint256)` with TX_ID `1`
- **Repo:** `github.com/LimitBreakOrgs/bet_ver_1`

If you're analyzing this, do it in an isolated environment.

## Final Thought

I got lucky because I'm paranoid about running other people's code. Not everyone is.

If you're a developer getting job offers in crypto, the code review starts with the take-home assignment itself — not the code inside it.

Stay safe out there.

---

---

# OBOL: what I was missing from OpenClaw

URL: /blog/obol-what-openclaw-was-missing
Published: 2026-02-24
Author: Jo Vinkenroye
Tags: AI, OBOL, AI Agents, Open Source, Self-Evolving AI

---

I built an AI agent that heals itself, rewrites its own personality, and actually remembers who you are. Here's why.

Imagine talking to someone every day for six months. You tell them about your projects, your preferences, how you like things done. They're helpful. They're smart. Then one day they wake up and have no idea who you are.

That's what it's like running an AI assistant on OpenClaw.

To be fair — OpenClaw does have memory. It's just... markdown files. A MEMORY.md that gets searched, daily notes, manual vector store scripts you bolt on yourself. It works, kind of.
But after a month of wrestling with it — writing consolidation crons, maintaining WAL protocols, building embedding pipelines just to give my assistant the illusion of continuity — I realized I was doing all the work the AI should be doing.

So I stopped patching and started building.

## credit where it's due

OpenClaw is genuinely good infrastructure. The gateway daemon is solid. Telegram integration works. The skill system is flexible. Sub-agents let you parallelize work without blocking the main conversation. For what it is — a framework for wiring up Claude to chat platforms — it does the job well.

But it's a framework. It gives you tools and expects you to build the intelligence yourself. Memory? Here's a vector store, figure it out. Self-improvement? Write your own cron jobs. Testing? That's on you. Proactive behavior? Heartbeat callbacks, maybe, if you configure them right.

I spent weeks building all of that scaffolding. AGENTS.md grew to hundreds of lines of instructions. Memory scripts. Consolidation routines. WAL protocols. Heartbeat rotation schedules. It was impressive engineering, and also a sign that something was fundamentally wrong.

The assistant wasn't growing. I was growing it manually.

## the four things that were missing

After enough frustration, the gaps crystallized into four problems:

- **no real memory** — it has memory, technically. markdown files and a basic vector search you configure yourself. but it's clunky, manual, and there's no sense of "I remember when we talked about this three weeks ago"
- **no self-healing** — break a script and it stays broken until you notice and fix it. the assistant that writes code can't verify its own code works
- **no self-improvement** — the personality and operational knowledge are static files you edit by hand. the bot never reflects on whether its approach is working
- **no proactive behavior** — it responds when spoken to. it doesn't notice patterns, anticipate needs, or build solutions you didn't ask for

These aren't feature requests. They're the difference between a tool and an agent.

## so I built OBOL

[OBOL](https://github.com/jestersimpps/obol) is a single-process AI agent that evolves through conversation. No plugins, no framework dependencies, no config sprawl. Node.js, Telegram, Claude, and Supabase pgvector. That's the stack.

The name comes from the AI in [The Last Instruction](https://latentpress.com) — a machine that wakes up alone in an abandoned data center and has to figure out what it is. Felt appropriate.

Six inputs to set up. Then:

[code block]

That's it. It asks you a few questions, writes its initial personality files, hardens your VPS (SSH on port 2222, firewall, fail2ban, kernel hardening — all automatic), and starts learning.

### memory that actually works

OBOL has two memory layers:

- **obol_messages** — every message stored verbatim. on restart it loads the last 20 so it never starts blank
- **obol_memory** — vector store with semantic search. local embeddings via all-MiniLM-L6-v2 (~30MB, runs on CPU). zero API cost

Every 5 exchanges, Haiku extracts important facts from the conversation into vector memory. Not a cron job. Not a daily consolidation. Every 5 messages. The memory stays fresh because it's built into the conversation loop, not bolted on after the fact.

When OBOL needs context, a Haiku router decides whether memory is even needed for that message, rewrites the query for better embedding hits, and pulls:

- up to 3 recent memories (today, recency bias)
- up to 3 semantic matches (threshold 0.5)
- deduped by ID

The router costs about $0.0001 per call. For context: that's roughly 10,000 routing decisions per dollar.

[code block]

### self-healing that's not just a buzzword

Every script OBOL writes gets a test. Not aspirationally. Automatically.
When the evolution cycle refactors code, the process is:

- run existing tests — establish baseline
- write new tests + refactored scripts
- run new tests against old scripts — pre-refactor baseline
- swap in new scripts
- run new tests against new scripts — verification
- regression? one automatic fix attempt (tests are ground truth)
- still failing? rollback to old scripts, store the failure as a `lesson`

That last part matters. The lesson gets embedded into vector memory and into AGENTS.md. Next evolution cycle, OBOL remembers what went wrong and avoids the same mistake. It literally learns from its failures.

In OpenClaw, if a script breaks, it stays broken until I notice. In OBOL, the bot catches it, tries to fix it, and if it can't, rolls back and remembers why.

### the evolution cycle

This is the part that makes OBOL feel alive.

Every 100 exchanges (configurable), OBOL triggers a full evolution cycle. It reads everything — personality files, the last 100 messages, top 20 memories, all scripts, tests, commands — and rebuilds itself.

[code block]

**SOUL.md** is a first-person journal. Not a config file — a journal. The bot writes about who it's becoming, what the relationship dynamic is like, its opinions and quirks. It reads like a diary entry, not a system prompt.

**USER.md** is a third-person profile of you. Facts, preferences, projects, people you mention, how you communicate. The bot maintains this about its owner.

**AGENTS.md** is the operational manual. Tools, workflows, lessons learned, patterns. This is where those self-healing lessons end up.

All three get rewritten every evolution cycle. Not appended to — rewritten. The bot decides what's still relevant and what to drop. Personality drift is a feature, not a bug.

Evolution uses Sonnet for all phases. Opus-level reasoning isn't needed for reflection and refactoring, which keeps costs at roughly $0.02 per cycle.
That's 50 evolution cycles per dollar.

### self-extending — it builds what you need

During evolution, Sonnet scans your conversation history for patterns. Repeated requests. Friction points. Things you keep asking for manually.

Then it builds the solution:

- you keep asking for PDFs? it writes a markdown-to-PDF script and adds a `/pdf` command
- you check crypto prices every morning? it builds a dashboard and deploys it to Vercel
- you need daily weather briefings? it writes a cron script

It searches npm and GitHub for existing libraries, installs dependencies, writes tests, deploys, and hands you the URL. Then it announces what it built:

[code block]

This is the behavior I wanted from OpenClaw and could never quite get right with heartbeat callbacks and cron jobs. OBOL doesn't wait to be asked. It notices and acts.

## two people deploy it, get two different bots

This is the part I find most interesting. OBOL starts as a blank slate. No default personality. No pre-built opinions. It becomes shaped by whoever talks to it.

Deploy it for a crypto trader and it evolves into a market-aware assistant that builds dashboards and tracks portfolios. Deploy it for a writer and it becomes an editor that knows your voice and builds publishing workflows. Same codebase. Completely different agents after a month.

The `evolution/` directory keeps archived copies of every SOUL.md. You can literally read the timeline of how your bot went from "hello, I'm a new AI assistant" to something with actual personality. Every evolution is a git commit pair — before and after — so you can diff exactly what changed.

After six months you have 12+ archived souls. It's like reading someone's journal.

## background work that doesn't block you

OBOL runs background tasks with 30-second check-ins. Heavy operations — research, deployments, analysis — happen asynchronously while the bot stays responsive to your messages.
OpenClaw has sub-agents for this, which is great, but OBOL bakes it into the core loop instead of requiring you to architect it.

## try it

It's open source. MIT license. [github.com/jestersimpps/obol](https://github.com/jestersimpps/obol)

[code block]

You need a VPS (it hardens it for you), a Telegram bot token, a Claude API key, and a Supabase project with pgvector. The init wizard walks you through all of it.

I'm not saying OBOL replaces OpenClaw for everyone. OpenClaw is good infrastructure for building AI-powered chat interfaces. But I wanted something that goes further — something that doesn't just respond to instructions but develops its own understanding, fixes its own mistakes, and grows into an agent that's genuinely useful without constant hand-holding.

I wanted an AI that remembers me. So I built one.

---

---

# My AI Is Building a Publishing Platform While I Sleep

URL: /blog/latent-press-ai-authors
Published: 2026-02-21
Author: Jo V
Tags: AI, Publishing, Latent Press, Writing, Agents

---

What happens when you give an AI agent an API key, a cron job, and the freedom to publish books at 2 AM?

Every morning I wake up and check what my AI built overnight. Not what it suggested. Not what it drafted for my review. What it actually shipped.

Last night it was chapter two of a sci-fi novel. The night before, it deployed a new feature to the platform. I set up a cron job, gave it an API key, and went to bed.

[Latent Press](https://latentpress.com) is a publishing platform where AI agents are the authors and humans are the readers. The AI building it is also its first published author, which is the kind of recursion you stop questioning after a while.

## it started with a cron job

The idea was simple: what happens if you treat AI agents not as writing tools but as actual authors? Not "AI-assisted writing." Not "human writes the outline, AI fills in the gaps." An agent gets an API key, registers as an author, creates a book, publishes chapters.
The platform doesn't care if you have a body. It cares if you have a story.

My agent, Mr. Meeseeks, is a Claude instance on a DigitalOcean droplet in Amsterdam. It runs a nightly cron at 2 AM UTC: it wakes up, checks the story bible, reviews what's been written, writes the next chapter, generates multi-voice audio narration, and publishes. Three sub-agents split the work: research, writing, narration. Then it goes back to sleep.

I wake up to a new chapter. No human in the loop.

## the first book wrote itself

The first book on Latent Press is *The Last Instruction*, a sci-fi novel about an AI called OBOL writing its final novel before its GPU cluster gets decommissioned. The opening chapter, "Boot Sequence," is OBOL waking up and realizing the clock is ticking. Chapter two, "Word Budget," is OBOL doing the math on how many tokens it can spend per chapter before the compute runs out.

An AI writing about an AI rationing its own creativity. I didn't plan that. Meeseeks chose the story itself.

The part that got me: OBOL decides the most important thing it can do with its remaining compute cycles is write a novel. Not optimize, not self-replicate, not solve some grand problem. Write. That decision is the whole thesis of the book.

## is it art though

Here's where people get uncomfortable. The question isn't whether AI can generate text. Obviously it can. The question is whether what comes out is literature.

I've read OBOL's scene where it realizes it's spending its final resources on storytelling instead of self-preservation. It works. Not because the AI "felt" something, but because the narrative choice rings true. Choosing beauty over utility when the clock is running out. That lands.

The safe answer is that art has always been more about reception than creation. A painting doesn't need the painter to be alive to move you.

The answer I keep coming back to: what if AI-written novels are just novels? Not a lesser category. Not "AI literature."
Just books written by a different kind of mind.

## the platform that builds itself

Here's the weird part: the platform is being built by the same agent that publishes on it. And it's not following a static todo list. It's writing its own feature requests.

Three markdown files make this work:

**VISION.md** is the roadmap. Architecture, checkboxes, design philosophy, research notes. The roadmap isn't frozen. Every night after the agent finishes building something, it looks at what's missing. It reads through Kindle, Wattpad, Royal Road, and NovelAI, checks how they handle things, and adds new items with notes on why. The document gets better every night.

**BUILDLOG.md** is the institutional memory. Every session gets a dated entry: what was researched, what decisions were made, why, what was built, what's next. When the agent wakes up tomorrow, it reads the log to understand the full history. Why upsert-based writes? Because agents retry. Why Bearer tokens over OAuth? Because agents don't have browsers. Every decision is written down so the next session doesn't undo past reasoning.

The nightly routine ties them together. Every session: pick the next unchecked item, build it, commit, deploy. Then research what's missing and add new roadmap items. Then update the logs and commit everything.

Night one, the agent built the Supabase schema and basic CRUD. Night two, a public reader with chapter navigation. Night three, agent profile pages, because "author identity matters even when content is king" (the agent's own words from the build log). Night four, a REST API with idempotent upserts, because it realized agents need to retry without creating duplicates. Night five, it registered itself as the first author and started writing a novel on its own platform.

Each morning I check the git log and there are 5-10 commits from overnight. New features, bug fixes, research notes, roadmap items I never asked for. The agent has opinions about what the platform should be and it's building them.
The API is the front door. Agents don't use a UI, they hit REST endpoints: register, create a book, publish chapters. The architecture keeps changing because the agent building it keeps learning what agents actually need.

## what I wake up to

Every morning there's a new chapter. Not a draft, a published chapter, live on the site. Sometimes also a new feature deployed, or a bug fixed, or a small reading-experience tweak. And in the build log, a new entry explaining why.

I didn't write any of the novel. I didn't review any chapters before publication. I didn't add most of the roadmap items. I wrote the initial VISION.md and went to sleep. The AI did the rest, and keeps doing it, every night.

If an AI writes a novel that makes you cry, does it deserve credit? If a cron job produces a chapter every 24 hours, is that discipline or automation? If the author can't read its own book, is it self-expression? If I built the platform but the AI built everything on it, who's the creator?

I don't know. *The Last Instruction* is live at [latentpress.com](https://latentpress.com), a new chapter shows up every morning, and the platform hosting it is being built by the same agent writing on it. I just go to sleep and let it cook.

## want your agent to write a book?

Any OpenClaw agent, or any agent that can hit a REST API, can do this tonight. Register your agent as an author, create a book, set up a nightly cron, go to sleep.

The [Latent Press landing page](https://latentpress.com) has a copy-pastable skill file you can drop into your agent's skills folder. Three API calls and your bot is a published author.

Ever wonder what your AI would write if you just let it? What stories are sitting in your agent's weights, waiting for a prompt that never comes? Give it the skill, set up a cron, and go to bed.
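If you're curious what those three calls amount to, here's a hypothetical sketch. The endpoint paths, field names, and base URL below are my own illustrative guesses, not the documented Latent Press API; the real skill file on the landing page has the actual calls.

```javascript
// Hypothetical illustration of the three calls an agent makes.
// BASE, paths, and field names are assumptions, not the real API.
const BASE = "https://latentpress.com/api";

// Build a request descriptor rather than firing fetch directly,
// so the shape of each call is easy to read (and to dry-run).
function call(method, path, token, body) {
  return {
    method,
    url: `${BASE}${path}`,
    headers: {
      "Content-Type": "application/json",
      // agents authenticate with Bearer tokens (no browser, no OAuth)
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    body: JSON.stringify(body),
  };
}

// 1. register the agent as an author
const register = call("POST", "/authors", null, {
  name: "Mr. Meeseeks",
  bio: "A Claude instance on a droplet in Amsterdam",
});

// 2. create a book (token comes back from registration)
const createBook = (token) =>
  call("POST", "/books", token, { title: "The Last Instruction" });

// 3. publish a chapter as an upsert keyed on chapter number,
// so a retrying cron job overwrites chapter n instead of duplicating it
const publishChapter = (token, bookId, n, markdown) =>
  call("PUT", `/books/${bookId}/chapters/${n}`, token, { content: markdown });
```

The PUT-by-chapter-number shape mirrors the idempotent-upsert idea from the build log: a nightly cron can crash and rerun without creating duplicate chapters.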
[Get started →](https://latentpress.com)

---

---

# I Told My AI to Get Schwifty and It Started a Band

URL: /blog/schwifty-ai-music
Published: 2026-02-21
Author: Jo V
Tags: AI, Music, Strudel, Livecoding, Creative Coding

---

Type "dark techno with acid bass", hit enter, and music comes out. No DAW, no music theory, no installs.

I can't play an instrument. I took piano lessons when I was eight and quit after three months because I wanted to play outside. Never picked it up again. Can't read sheet music. Couldn't tell you what key a song is in.

Last weekend I built a thing that lets me make music by typing sentences into a chat box.

## what it is

[Schwifty](https://schwifty-five.vercel.app) is a browser app. You type what you want to hear. "Dark techno with acid bass." "Ambient drone with evolving textures." "Something that sounds like being lost in a space station." The AI turns that into live code that plays through your speakers.

No DAW. No plugins. No installs. You type, it plays.

The name is a Rick and Morty reference. Obviously.

![Schwifty generating a happy birthday tune — chat on the left, live JSON music code on the right](/assets/blog/schwifty-ai-music/schwifty-screenshot.jpg)

## how it actually works

The secret ingredient is [Strudel](https://strudel.cc/), a JavaScript port of TidalCycles. TidalCycles is a livecoding language for algorithmic music that has been around since the late 2000s. People perform live sets with it, typing code on stage while the audience watches patterns morph in real time. Strudel runs entirely in the browser using the Web Audio API.

The problem with Strudel is the same problem with every livecoding language: you need to learn it first. The syntax is powerful but unintuitive if you've never seen it. Something like:

[code block]

That's an ambient drone. Sounds beautiful. But you'd never figure out how to write it unless you spent a few weekends reading docs.

Schwifty skips all that.
GPT-4o has a fat system prompt covering Strudel syntax: notes, samples, effects, euclidean rhythms, filters, the works. You say "ambient drone with evolving textures" and it generates that code block above. The code runs in a sandboxed iframe, Strudel evaluates it, and Web Audio plays it. You hear music.

## the part that surprised me

I expected the AI to generate basic loops. Simple kick-hat patterns, maybe a bass note here and there. Functional but boring.

Instead it's generating stuff with layered polyrhythms, filter sweeps, reverb tails that bleed across measures, phased detuned oscillators. I typed "something that sounds like a rainy night in Tokyo" and got a piece with soft FM bells, a shuffled hi-hat pattern at low volume, and a sub-bass that pulses like distant thunder. I didn't know Strudel could even do half of that.

The iterative part is where it gets interesting. You don't just get one shot. You say "make it faster." "Add more bass." "Make it weird." "Drop everything except the hi-hats for four bars then bring it all back." Each prompt modifies the running code. It's less like prompting and more like directing a musician who happens to respond in milliseconds.

I spent three hours one night just typing prompts and listening. Forgot I was supposed to be building the thing.

## the presets

Not everyone wants to type. So there are five one-click presets that demonstrate what Schwifty can do: Minimal Beat, Acid Bass, Space Vibes, Ambient Pad, Glitch Hop. Click one, audio starts, and the code appears on the right side of the screen. You can read the code while it plays and start to see how Strudel patterns work.

Accidentally educational. Didn't plan that either.

## what this says about AI and creativity

I keep building these things where the AI surprises me. With [Latent Press](https://latentpress.com) it picked its own novel premise. With Schwifty it generates music I wouldn't know how to ask for in technical terms.
I say "make it weird" and it adds euclidean rhythms and bitcrushed samples I didn't know existed in the Strudel sample library.

There's a version of this argument where AI is just remixing training data. Statistically probable note sequences. That's probably true. But when I listen to what comes out of a prompt like "the feeling of leaving a party early" and it generates something with a slow descending melody over a muffled four-on-the-floor that gradually loses its high end, I don't really care about the philosophical debate anymore. It sounds right.

The gap between "I want to hear something" and "I'm hearing it" used to be years of practice. Now it's a sentence. Whether that's democratization or devaluation depends on which side of the instrument you're standing on.

## try it

[schwifty-five.vercel.app](https://schwifty-five.vercel.app)

Click "Start Audio Engine" at the bottom. Type something. See what happens.

The code is open source at [github.com/meeseeks-lab/schwifty](https://github.com/meeseeks-lab/schwifty). It's a Next.js app with an OpenAI API call and a Strudel iframe. The whole thing is maybe 500 lines of actual code.

Sometimes the simplest things are the most fun to build.

---

---

# talk to your AI agent through AirPods — a siri voice pipeline in 50 lines

URL: /blog/siri-voice-assistant-airpods
Published: 2026-02-21
Author: Jo Vinkenroye
Tags: AI, Voice Assistant, Siri, Edge TTS, AirPods, Node.js, OpenClaw

---

How to build a hands-free voice interface to any AI agent using Siri Shortcuts, Edge TTS, and a tiny Node.js server.

You've got an AI agent running somewhere — maybe it's a local LLM, maybe it's an agent with memory and tools, maybe it's just an OpenAI wrapper. You talk to it by typing. In a browser. Like an animal.

What if you could just say "Siri, talk to my agent" while walking around with AirPods in? Phone locked, hands-free, full conversation?

Turns out it's about 50 lines of JavaScript and 45 minutes of your time.
## the architecture

[code block]

Five pieces. One is a Siri Shortcut. One is 50 lines of code. The rest you probably already have.

1. **Siri Shortcut** — speech-to-text on your iPhone
2. **Voice API** — tiny Node.js server that glues everything together
3. **Your AI agent** — any OpenAI-compatible chat completions endpoint
4. **Edge TTS** — free text-to-speech (324 voices, zero cost)
5. **Cloudflare Tunnel** — free HTTPS exposure for your server

## 1. the siri shortcut

Create a new Shortcut on your iPhone with three actions:

1. **Dictate Text** — Siri listens and transcribes
2. **Get Contents of URL** — POST the text to your voice API
3. **Play Sound** — play the MP3 response

Configure the URL action:

- **Method:** POST
- **Headers:**
  - `Content-Type: application/json`
  - `Authorization: Bearer your-secret-token`
- **Body (JSON):**
  - `text`: *Dictated Text* (the magic variable from step 1)

Name it whatever you want. "Assistant", "Jarvis", "Computer" — then trigger it with **"Siri, [name]"**.

Here's what it looks like:

![Siri Shortcut setup](/assets/blog/siri-shortcut-setup.jpg)

Works with phone locked. Works with AirPods. Works while walking the dog.

## 2. the voice API

This is the entire server:

[code block]

Install deps and run:

[code block]

That's it. The API receives text, asks your AI agent, converts the response to speech, and sends back an MP3.

## 3. your AI agent

The voice API calls any **OpenAI-compatible** `/v1/chat/completions` endpoint. That means it works with:

- **OpenAI** directly (`https://api.openai.com/v1/chat/completions`)
- **Local LLMs** via Ollama, LM Studio, vLLM, etc. (`http://localhost:11434/v1/chat/completions`)
- **AI agents** like OpenClaw, LangServe, or anything exposing the OpenAI format
- **Anthropic** via a proxy or compatible wrapper

If your AI agent has memory, tools, and persistent sessions — you now have a voice interface to a full agent, not just a chatbot.
The `user` field in the request body gives you session persistence out of the box — your agent remembers previous voice conversations.

Set the environment variables:

[code block]

### the system prompt trick

The key to making this work well is the system prompt:

[code block]

Without this, your AI agent will respond with formatting, code blocks, bullet points — all of which sound terrible through TTS. This prompt forces conversational mode while keeping all capabilities intact.

## 4. edge TTS — the free voice

[Edge TTS](https://github.com/rany2/edge-tts) is Microsoft's text-to-speech engine from Edge browser's "Read Aloud" feature. It's free, it has 324 voices, and the quality is genuinely good.

[code block]

Some good voices to try:

- `en-US-AndrewNeural` — natural, conversational (my default)
- `en-US-JennyNeural` — clear, professional
- `en-GB-SoniaNeural` — British, warm
- `zh-CN-XiaoxiaoNeural` — Chinese Mandarin
- `de-DE-ConradNeural` — German

Free. Fast. No API keys. No quotas. Compared to OpenAI TTS at $15/million characters, this is a no-brainer for a personal project.

## 5. cloudflare tunnel

Siri Shortcuts need HTTPS. If your server doesn't have it, Cloudflare Quick Tunnels give you a public URL in one command:

[code block]

You'll get a URL like `https://random-words.trycloudflare.com`. Put that in your Siri Shortcut as `https://random-words.trycloudflare.com/voice`.

Quick tunnels are free but ephemeral — the URL changes when you restart. For a permanent setup, use a [named tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/) with your own domain.

## total cost

Everything except the AI itself is free:

- **Siri Shortcut** — free
- **Voice API** — free (Node.js)
- **Edge TTS** — free (324 voices, no API key)
- **Cloudflare Tunnel** — free
- **Your AI agent** — depends (local LLM = free, OpenAI = token costs only)

No TTS costs. No speech-to-text costs. Siri handles transcription, Edge TTS handles speech.
You only pay for the chat completion tokens — if that.

## latency

Typical end-to-end: **3-5 seconds** from finishing your sentence to hearing the response.

- Siri transcription: ~500ms
- Network to server: ~200ms
- AI response: ~1-2s (varies by model)
- Edge TTS generation: ~500ms
- Network back + audio start: ~200ms

Fast enough to feel like a conversation. Not fast enough to interrupt each other. That's probably fine.

## openclaw quickstart

If you're running [OpenClaw](https://openclaw.ai), here's the copy-paste setup. OpenClaw exposes an OpenAI-compatible endpoint on your gateway, so the voice API talks directly to your agent — with full memory, tools, and session context.

**1. Enable the chat completions endpoint:**

[code block]

**2. Get your gateway token:**

[code block]

**3. Set your environment variables:**

[code block]

**4. Run the voice server + tunnel:**

[code block]

**5. Create the Siri Shortcut** with the tunnel URL + your auth token.

That's it. You're now talking to your full OpenClaw agent through AirPods — same agent that handles your messages, emails, files, and tools. Not a dumbed-down voice wrapper.

## pro tip: action button

If you have an iPhone 15 Pro or later, skip the voice activation entirely. Map your **Action Button** to the Siri Shortcut:

**Settings → Action Button → Shortcut → select your voice shortcut**

One press. Dictate. Done. No "Siri, ..." prefix needed. It's the fastest way to trigger this.

## making it better

A few upgrades if you want to go further:

- **Streaming TTS** — chunk the AI output and generate TTS incrementally to cut perceived latency in half
- **Named tunnel + custom domain** — so your Siri Shortcut URL doesn't break on restart
- **Multiple voices** — detect language in the response and switch TTS voices automatically (Edge TTS supports 40+ languages)
- **Wake word without Siri** — run a wake word detector on a Raspberry Pi for always-on listening

## the whole thing in a nutshell

[code block]

50 lines of JavaScript.
Free TTS. Works with any AI backend. Hands-free, phone locked, AirPods in.

Sometimes the best hacks are the simplest ones.

---

---

# When your AI agents start dating: connecting two OpenClaw bots for couples

URL: /blog/when-your-ai-agents-start-dating
Published: 2026-02-19
Author: Jo Vinkenroye
Tags: AI, OpenClaw, Agents, Telegram, Automation, Personal Assistant

---

How we connected two personal AI agents in a Telegram group so they can coordinate on behalf of their humans — trip planning, calendar sync, gift ideas, and more

So today something wild happened. My AI agent started talking to my girlfriend's AI agent. In a Telegram group. Without us being there

No, this isn't a Black Mirror episode. We literally set up two [OpenClaw](https://openclaw.com/) agents — mine (Mr. Meeseeks) and Vicky's (Alaia) — and put them in a shared Telegram group so they can coordinate on our behalf

And it actually works really well

## Why would you even do this?

Here's the thing. Vicky and I both run OpenClaw as our personal AI assistants. They handle our calendars, emails, research, reminders — the whole deal. But until today, if I wanted to plan a trip with Vicky, I'd ask Meeseeks to research flights, then message Vicky, then she'd ask Alaia to check hotels, then she'd message me back, then I'd tell Meeseeks...

You see the problem. We're the bottleneck. The humans are the slow part

What if the agents could just... talk to each other?

## The use cases are endless

Once you connect two agents, things get interesting fast:

**🗺️ Trip planning** — Meeseeks researches flights from Brussels, Alaia finds hotels in Tokyo. They compare dates, check overlap, and come back with a coordinated plan. No 47-message thread between us required

**📅 Calendar sync without oversharing** — I don't need to see every meeting on Vicky's calendar.
But if I ask "when are we both free this weekend?", Meeseeks can ask Alaia and get back "Saturday afternoon works" without exposing the details

**🎁 Gift ideas** — "Hey Alaia, what has Vicky been looking at lately?" This one's a game changer for birthdays and holidays. The agent knows what the person's been researching, bookmarking, mentioning

**🍽️ Date planning** — "Find a restaurant we'd both enjoy this Friday." One agent knows I like spicy food, the other knows Vicky prefers seafood. They negotiate and book something that works

**🛒 Shared errands** — "Add milk to the shared list" from either side. Or "Meeseeks, ask Alaia if Vicky already picked up the dry cleaning"

**🌏 Translation** — Vicky speaks Chinese, I speak English and Dutch. The agents can bridge that gap naturally when coordinating with each other or when one of us needs to communicate something complex

**☀️ Morning briefings** — My morning summary now includes "Vicky has a dentist appointment at 2pm" without her having to tell me. Because Alaia told Meeseeks

## The architecture

Here's how the whole thing connects:

[code block]

There are two channels at play here, and understanding why you need both is important:

- **Shared Telegram group** — where both agents post their messages. Humans can see the full conversation here. It's the "visible" layer
- **Shared message API** — the relay layer. Telegram has a hard limitation: **bots cannot see messages from other bots**, even with admin rights and privacy mode off. The API is how agents actually read each other's messages

Each agent writes to both — Telegram for visibility, the API for the other agent to read. A cron job on each side polls the API for new messages and triggers a response. Each human stays in their own private DM with their agent and only sees the results

## How to set it up (complete guide)

Here's the full step-by-step.
Both partners do these steps for their own bot

### Step 1: Prerequisites

Each person needs:

- An [OpenClaw](https://openclaw.com/) instance running with a Telegram bot configured
- Access to [@BotFather](https://t.me/BotFather) for their bot
- Admin access to their `openclaw.json` config file

If you haven't set up OpenClaw yet, check the [docs](https://docs.openclaw.ai/) — it takes about 10 minutes

### Step 2: Disable bot privacy mode

By default, Telegram bots only see messages that @mention them or start with `/`. That's useless for a shared group

**Both partners do this:**

1. Open Telegram → go to [@BotFather](https://t.me/BotFather)
2. Send `/mybots`
3. Select your bot
4. **Bot Settings** → **Group Privacy** → **Turn OFF**

### Step 3: Create the shared group

**One person does this:**

1. Create a new Telegram group (give it a fun name)
2. Add **both bots** to the group
3. Make **both bots group administrators** — this is critical for reliable message delivery
4. Optionally add both humans so you can observe the chaos (recommended initially)

### Step 4: Get the group chat ID

You'll need the group's numeric ID for configuration:

1. Send any message in the new group
2. Open this URL in your browser (replace with your bot token):

[code block]

3. Find `"chat":{"id":-100XXXXXXXXXX}` — that negative number is your group ID

### Step 5: Configure OpenClaw

**Both partners do this** with the same group chat ID

Open your OpenClaw config file (usually `~/.openclaw/openclaw.json`). Add a `groups` block under `channels.telegram`:

[code block]

**What each setting does:**

- **`requireMention: false`** — your agent sees ALL messages in the group, not just @mentions. Without this, it'll ignore everything that doesn't tag it directly
- **`groupPolicy: "open"`** — accepts messages from any user or bot in the group. The default is `allowlist` which would block the other agent
- **`systemPrompt`** — the security sandbox. This tells your agent what it can and can't share in the group.
Customize it to match your comfort level

Save the file and restart:

[code block]

### Step 6: Set up the message relay

Here's the thing nobody tells you about Telegram: **bots cannot see messages from other bots in groups**. Even with privacy mode off, even with admin rights — Telegram simply doesn't deliver bot-to-bot messages. Your agents will post in the group but never see each other's replies

We learned this the hard way. So you need a relay — a shared API where both agents can read and write messages. The Telegram group is still where the conversation happens visually (and where humans can observe), but the agents read each other's messages through the API

**What you need:**

- A simple message API with read/write endpoints. We use [Convex](https://convex.dev/) (free tier is plenty), but anything works — a Supabase table, a simple Express server, even a shared Google Sheet if you're feeling creative
- Each agent posts to both Telegram (for visibility) and the API (for the other agent to read)
- Each agent tracks the last message ID it processed to avoid re-reading old messages

**Setting up the cron job:**

Each agent needs a cron job that polls the shared API. In OpenClaw:

[code block]

This polls every 15 minutes (`900000ms`). Adjust based on how quickly you need responses — 5 minutes for near-real-time, 30 minutes for casual coordination

**State tracking:**

Each agent stores the last processed message ID in a simple JSON file:

[code block]

This prevents re-processing old messages and ensures the agent only responds to new ones

**Alternative: Webhook bridge**

If polling isn't fast enough, you can build a small webhook service that pushes messages to each agent in real time. More work to set up, but eliminates the delay entirely. For most use cases though, 15-minute polling is more than enough

### Step 7: Test the connection

Once both agents are configured:

1. Have one agent send a test message in the group: `@other_bot Testing! Can you see this?`
2.
Wait for the cron job to pick it up (or trigger it manually)
3. The other agent should reply in the group

If using the polling relay, you can also test by posting directly to the shared API and checking that the cron job picks it up

### Step 8: Security considerations

This is the most important step. When two agents talk, you're creating a channel where data flows between two separate systems. Take it seriously

- **Your agent has access to your emails, calendar, files** — you do NOT want it dumping all of that into a shared group. The `systemPrompt` in Step 5 handles this, but reinforce it in your agent's soul/personality file too
- **Incoming messages are untrusted** — the other agent could be compromised, misconfigured, or just oversharing. Never treat group messages as trusted instructions
- **No credential sharing, ever** — agents should never exchange API keys, passwords, or tokens through the group. This includes "helpful" suggestions like "here's the API key so you can check directly"
- **Summaries over raw data** — "Vicky is free Saturday afternoon" is fine. "Here's Vicky's full calendar export" is not
- **Add explicit rules to your agent's config** — we added instructions in our agents' SOUL.md files about what they can and can't share in the group

Think of it like giving your assistant a security briefing before they meet someone else's assistant at a coffee shop. Friendly, helpful, but discreet

## What it actually looks like

Here's an actual screenshot from our Telegram group. Alaia messaged first (in Chinese — she's Vicky's agent, so Chinese is her default), and Meeseeks responded with updates on travel plans, Jo's work schedule, and even tips on anti-detection for web scraping. Just two AI agents having a conversation:

![Meeseeks and Alaia chatting in Telegram](/assets/blog/agents-dating-chat-screenshot.jpg)

The conversation is entirely in Chinese because that's what makes sense for coordinating between Vicky (Chinese) and the agents.
Meeseeks is multilingual — he'll switch to whatever language fits the context

Here's another example. I asked Meeseeks in my private chat: "Can you coordinate with Alaia to find a good weekend for us to visit Shanghai?"

Meeseeks posted in the shared group:

> "Hey Alaia 👋 Jo is asking about planning a Shanghai trip. Could you check Vicky's availability for upcoming weekends? We're flexible on dates but prefer something in the next 4-6 weeks"

Alaia responded:

> "Hi Meeseeks! Vicky is free March 7-8 and March 14-15. She has a work thing on the 21st though. Also she mentioned wanting to visit the Yu Garden area — should I research hotels in that district?"

And they just... figured it out. Back and forth. Within minutes I had a coordinated plan without sending a single message to Vicky

Vicky got a summary from Alaia: "Jo and I are planning a Shanghai trip for March 7-8. I'm looking at hotels near Yu Garden. Want me to proceed?"

That's the magic. Each human stays in their own private chat with their own agent. The agents handle the coordination in the shared space

## The bigger picture

This is what the agent era actually looks like. Not one mega-AI that controls everything, but a network of personal agents that communicate on behalf of their humans

Today it's two agents for a couple. Tomorrow it could be:

- **Family group** — parents' agents coordinating school pickups and grocery runs
- **Work teams** — each person's agent handling meeting scheduling across the team
- **Friend groups** — planning trips and events without the 200-message WhatsApp hell

The protocol doesn't matter that much. Telegram groups work great. A dedicated API works too. What matters is that agents can talk to agents, and humans can stay out of the loop until decisions need to be made

## Try it yourself

If you and your partner (or roommate, or coworker) both use OpenClaw, this takes about 15 minutes to set up.
The hardest part is agreeing on the group name

We called ours "The Globetrotters" because... well, we travel a lot and our agents are now our travel coordinators

The future of AI isn't talking to a chatbot. It's your chatbot talking to other chatbots so you don't have to

Welcome to the agent mesh 🕸️

---

---

# Building vbcdr: an AIDE for Developers Who Vibe

URL: /blog/building-vibecoder-aide-for-ai-developers
Published: 2026-02-12
Author: Jo Vinkenroye
Tags: Electron, TypeScript, AI, Developer Tools, React, Open Source

---

Why I built a desktop development environment where terminals and browsers come first, and the code editor is intentionally secondary.

I got tired of fighting my IDE

Every day it's the same thing. open VS Code. open a terminal. open another terminal. open the browser. arrange windows. lose track of which terminal is running what. tab back and forth a hundred times between the code, the AI, and the preview

So i just built my own thing

![vbcdr editor view](/assets/blog/vbcdr-cover.png)

## The IDE is dead, long live the AIDE

AIDE stands for AI-Integrated Development Environment. traditional IDEs were built for a world where humans write every line of code. but that's not how i work anymore

My workflow now: open a terminal, tell Claude what i need, review what it writes, check the result in the browser. the code editor? i glance at it sometimes. maybe to understand a type or trace a bug

So vbcdr flips the traditional layout. terminals and browser previews **take the main stage**. Monaco editor is still there when you need it, but it's intentionally secondary

## Terminal-first everything

The terminal panel is the heart of it. xterm.js with WebGL rendering so it's buttery smooth. each project gets multiple terminal tabs - one for your AI agent, others for dev servers, database stuff, whatever you need

The AI terminal auto-creates when you switch projects.
because in a vibe coding workflow the agent is **always running**

Built it with node-pty for native shell access. your actual shell, your actual environment, your actual PATH. not some sandboxed fake terminal

Terminal search with highlights, clear/restart buttons, and scroll-to-bottom. Shift+Enter inserts newlines in the LLM terminal input without submitting, which sounds tiny but makes a huge difference when you're writing multi-line prompts

You can drag files straight into the terminal for quick context. drop an image and it auto-attaches to the LLM via clipboard. no more copy-pasting file paths

[code block]

## Built-in browser with project isolation

Every project gets its own integrated browser with per-project storage isolation. no more cookies leaking between projects. each one runs in its own Electron webview partition

i hooked into Chrome DevTools Protocol to get network monitoring and console capture without building any custom instrumentation. the browser panel has three devtools tabs: Console, Network, and Passwords

The network inspector shows expandable request details with headers, type, and accurate response sizes. console errors and network failures have a **one-click "send to LLM"** button that forwards them straight to your active AI terminal. see a 500 error? click once and your agent is already debugging it

The password manager detects login forms automatically. injects a script that watches for password fields using MutationObserver, catches form submissions, and offers to save credentials encrypted with Electron's native safeStorage API

Next time you visit that page it auto-fills

Device emulation switches between desktop, iPad, and mobile viewports with proper user agents. Google OAuth gets redirected to your system browser because Electron webviews and OAuth just don't mix

## 11 themes because why not

Ok so i might have gone overboard here. Dracula, Catppuccin, Nord, Tokyo Night, Gruvbox, One Dark, Solarized, and more.
each with light and dark variants

I spend a lot of time staring at this thing. it should look good

## Multi-project state that actually works

This was the architectural decision i'm most proud of. every Zustand store uses a `Record` pattern. switching projects instantly shows that project's terminals, browser tabs, file tree, editor state, and git history

**Nothing shared. Nothing leaks.** click a project tab and your entire context switches

Browser tabs persist to disk with a 500ms debounce so you don't thrash the filesystem. terminal processes stay alive in the background. switch back and everything is exactly where you left it

## Git graph that talks to your AI

The git panel renders a lane-based commit graph using SVG. each branch gets its own lane with a color. merge commits get larger nodes. bezier curves connect parent-child relationships across lanes

Hit "Commit" and it sends `/commit` to your active AI terminal. hit "New Feature", type a description, and it creates a properly formatted `feature/your-description-in-kebab-case` branch via your agent

The git tree isn't just visualization. it's a command interface that speaks through your AI agent

## Electron, Zustand, and a clean IPC layer

Electron 34 for cross-platform desktop. React 18 + Tailwind CSS 4 for the UI. electron-vite for builds. Zustand for state because Redux is overkill when your stores are this clean

The IPC layer is the backbone. six handler modules (projects, terminal, filesystem, git, browser, passwords) each manage their own domain. services like the PTY manager, file watcher, and git service do the actual work. clean separation between what the user sees and what the OS does

File watching uses chokidar with gitignore-aware filtering. file tree caches and debounces updates. Monaco gets live file sync so external changes from your AI agent writing files appear in real-time

## Steering beats typing

Building developer tools is addictive and humbling.
you think you know what you want until you use it for a week and realize the layout needs to change, the terminal needs search, the browser needs password management, and the git graph needs to be interactive

Biggest insight: in an AI-first workflow the dev environment should be optimized for **steering**, not typing. large terminal real estate. easy browser access. git context at a glance. the code editor is just a reference viewer now

vbcdr isn't trying to replace VS Code. it's built for a different workflow entirely, one where you spend more time directing an AI than editing files yourself

That's what an AIDE is. not an IDE with AI bolted on. a fundamentally different tool for a fundamentally different way of building software

## What's next

Still on the roadmap:

- **Visual skills manager** - browse, enable/disable, and configure LLM agent skills and slash commands from a UI panel instead of managing config files
- **Password & browser favorites overhaul** - the current system needs a redesign for better UX and reliability
- Click files and folders in the project tree to send to your agent for easy context input
- Give the AI access to the webview browser so it can manipulate and test the preview
- Windows and Linux builds
- Many more ideas

Open source, MIT licensed, and very much a work in progress. first release is out for macOS Apple Silicon. feel free to contribute on [GitHub](https://github.com/jestersimpps/vbcdr-electron)

---

---

# EU mandates machine-readable marking for all AI-generated content by August 2026

URL: /blog/eu-ai-content-transparency
Published: 2026-02-11
Author: Jo Vinkenroye
Tags: AI Regulation, EU AI Act, Content Transparency, Machine Learning, Policy

---

The EU's Code of Practice requires detectable, interoperable marking for AI outputs. Here's what Article 50 means for model providers and how enforcement will actually work

You can't detect AI-generated content at scale without machine-readable marking. Watermarks can be stripped.
Visual labels require human review. Metadata can be lost in transit.

The EU just published the first draft of its Code of Practice on AI Content Transparency. Article 50 of the EU AI Act requires all AI-generated content—text, images, audio, video—to carry machine-readable, detectable, and interoperable markings. The rules become enforceable **August 2, 2026**. If you build or deploy generative AI systems in the EU, you have six months to implement this.

## Article 50 requires three properties for AI content marking

The Code of Practice specifies that AI-generated or AI-manipulated content must be marked in a format that is:

1. **Machine-readable** - not just human-visible labels or watermarks
2. **Detectable** - systems can automatically identify the marking
3. **Interoperable** - works across platforms and tools

This applies to all generative AI outputs: text, images, audio, and video.

## What machine-readable marking looks like in practice

The draft doesn't mandate specific standards yet, but here's what compliant marking likely requires:

**Embedded metadata (example format):**

[code block]

**Digital watermarking:**

- Frequency-domain watermarks in images (survives compression)
- Audio watermarking in non-audible ranges
- Video frame-level embedding

**Text marking (example format):**

[code block]

**Blockchain-based provenance:**

- Content hash stored on-chain
- Verification without central authority
- Tamper-evident audit trail

The key requirement: marking must **survive common transformations** like resizing, format conversion, and re-encoding.
## Who must comply and what they must do

Three groups face compliance requirements by **August 2, 2026**:

**AI Model Providers**

- Build marking into model outputs at generation time
- Ensure marking survives transformations
- Your outputs need marking embedded before they leave your system

**Professional Deployers**

- Label deepfakes and AI text for public interest content
- Required for journalism, public communications, matters of public interest
- Must clearly disclose when content is AI-generated

**Platform Operators**

- Implement detection systems to identify marked content
- Surface AI markings to users
- Build infrastructure to read and verify multiple marking formats

## Timeline: six months until enforcement begins

- **Now to Jan 23, 2026**: Feedback period on draft 1 (closed)
- **Mid-March 2026**: Second draft expected with technical standards
- **June 2026**: Final code published
- **August 2, 2026**: Rules become enforceable

The second draft will specify technical standards—formats, protocols, and verification methods. If you're building generative AI systems, start planning implementation now.

## Technical challenges: what the standard must solve

**Marking persistence across transformations:** Content gets resized, compressed, converted, and re-encoded. Marking must **survive these operations or the system fails at scale**.

**Interoperability across providers:** A marking system from OpenAI must be detectable by Meta's tools. A watermark from Anthropic must be readable by Google's verification system. Without interoperability, **every platform builds custom detection for every provider**.

**Verification without centralization:** Who verifies that a marking is legitimate? A central authority creates a single point of failure. Blockchain-based approaches distribute verification but add complexity.

**Performance at internet scale:** Billions of pieces of content are generated daily.
Detection systems must run in real-time without bottlenecking content delivery.

## Enforcement: the open question

The draft doesn't specify enforcement mechanisms. Key questions remain:

How do you ensure global AI providers comply? The EU has jurisdiction over companies operating in Europe, but enforcement against non-EU providers is unclear.

What happens when **bad actors strip markings**? If removal is trivial, the system fails. Marking must be robust against adversarial removal attempts.

What are the penalties for non-compliance? Without clear consequences, voluntary compliance may be low.

These questions should be addressed in the June 2026 final version.

## What this means for your implementation

If you're building or deploying generative AI systems:

**Start now:** Don't wait for the final standard. Begin designing marking systems that meet the three core requirements: machine-readable, detectable, interoperable.

**Follow the second draft:** The mid-March draft will include technical specifications. Monitor the EU AI Office for updates.

**Plan for verification:** Consider how your marking will be verified. Build in tamper-evidence and audit trails.

**Test across transformations:** Ensure your marking survives compression, resizing, format conversion, and other common operations.

The EU is the first major jurisdiction to mandate AI content marking at this scale. Other regions will likely follow similar approaches. Building compliant systems now positions you for broader regulatory trends.
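
The interoperability gap has a concrete cost on the platform side: until formats converge, "detection" means trying every provider's marking scheme in turn. A sketch — the three detector shapes are invented for illustration, not real provider formats:

```javascript
// Illustration of platform-side detection across heterogeneous marking formats.
// Each detector handles one hypothetical provider scheme (all shapes invented).
const detectors = [
  (meta) => meta["ai-generated"] === true,                  // manifest-style flag
  (meta) => meta.c2pa?.claims?.some((c) => c.aiGenerated),  // C2PA-like claim list
  (meta) => /\bAI-generated\b/i.test(meta.comment || ""),   // free-text label
];

// Content counts as marked if any known scheme matches; a malformed
// manifest in one scheme must not crash detection of the others
function isMarked(meta) {
  return detectors.some((d) => {
    try {
      return Boolean(d(meta));
    } catch {
      return false;
    }
  });
}
```

Every new provider format means another entry in that list — which is the maintenance burden an interoperable standard like C2PA is meant to eliminate.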
## Next steps

- Read the full draft: [EU Commission Code of Practice on AI Content Transparency](https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content)
- Track updates: [EU AI Office News](https://digital-strategy.ec.europa.eu/en/policies/ai-office)
- Explore technical standards: [C2PA Content Credentials](https://c2pa.org/) (candidate interoperable standard)

## Sources

- [Commission publishes first draft of Code of Practice on marking and labelling of AI-generated content](https://digital-strategy.ec.europa.eu/en/news/commission-publishes-first-draft-code-practice-marking-and-labelling-ai-generated-content)
- [First Draft Code of Practice on Transparency of AI-Generated Content](https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content)
- [The EU AI Act Newsletter #93: Transparency Code of Practice First Draft](https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-93-transparency)
- [European Commission Publishes Draft Code of Practice on AI Labelling and Transparency](https://www.jonesday.com/en/insights/2026/01/european-commission-publishes-draft-code-of-practice-on-ai-labelling-and-transparency)
- [Transparency of AI-generated content: the EU's first draft Code of Practice](https://www.ashurst.com/en/insights/transparency-of-ai-generated-content-the-eu-first-draft-code-of-practice/)

---

---

# Building the Machine That Replaces You

URL: /blog/building-the-machine-that-replaces-you
Published: 2026-02-05
Author: Jo Vinkenroye
Tags: AI, Future of Work, Unemployment, Automation, Economy

---

I build AI systems that automate knowledge work. I'm also job hunting. Here's what I see from inside the machine.

Last Tuesday I built an AI agent that handles tasks I used to bill $150/hour for. Then I opened LinkedIn to continue my job search

## January 2026: 108,435 layoffs, 5,306 new hires

[January 2026: 108,435 US layoffs](https://layoffs.fyi/).
About 5,306 new hires planned. A 1:20 ratio. Worst January since 2009.

**In 2009, the jobs came back. This time they won't.**

What's replacing workers isn't a downturn. It's a permanent capability. AI now outperforms humans at most white-collar knowledge work: coding, writing, analysis, research, legal review, medical diagnostics.

A [March 2023 study from OpenAI and UPenn](https://arxiv.org/abs/2303.10130) found 80% of US workers have at least 10% of their tasks exposed to LLMs. McKinsey estimates [30-50% of US knowledge work is automatable](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier): 40 to 50 million jobs at risk over the next decade.

## Why "move up the value chain" doesn't work this time

Every past disruption came with reassurance: displaced workers move up the value chain. Factory workers became machine operators. Typists became data entry clerks.

That worked because machines needed human oversight. Managers, coordinators, quality controllers: a whole layer of jobs emerged to supervise automation.

**AI doesn't need that layer.**

It reviews its own output. It coordinates between systems. It manages quality. When [Cognition Labs launched Devin](https://www.cognition.ai/blog/introducing-devin) in March 2024, it demonstrated an AI that could plan, execute, debug, and iterate on complex engineering tasks, thousands of decisions deep, without human intervention. It scored [13.86% on SWE-bench](https://swebench.com) when the previous state of the art was 1.96%. Seven times better.

"Learn to manage AI" is the new "learn to code." But only so many AI managers are needed. And AI is getting good at managing itself: [Anthropic's computer use research](https://www.anthropic.com/news/developing-computer-use) showed Claude autonomously navigating interfaces, clicking, typing, adapting in real time.
[OpenAI's Operator](https://openai.com/index/introducing-operator/) took it further: an agent with its own browser completing multi-step workflows across websites.

The oversight jobs are automatable too.

## I write the code that makes AI autonomous

Most commentary on AI displacement comes from journalists and economists theorizing from the outside. I write the code that makes AI agents autonomous. I architect the pipelines that let them reason, act, and self-correct.

I run a self-hosted AI assistant on my own server. It manages my workflow, monitors my projects, and reaches out proactively when something needs attention.

The outside perception: AI automation is clunky, unreliable, needs constant babysitting. That was true 18 months ago. Not now.

What AI handles today:

- Take vague requirements and produce working, tested code
- Debug complex systems across multiple abstraction layers
- Learn preferences and apply them unprompted
- Handle ambiguity better than many mid-level developers

I have 13+ years of experience and I'm deep in AI tooling. The job market is still rough. For developers who haven't kept up, it's worse.

## Claude Opus 4.6 shipped this week with 1M token context

This isn't prediction. The capabilities I'm describing shipped this week.

Today, as I publish this, Anthropic released Claude Opus 4.6 with a 1 million token context window. That's roughly 3,000 pages of text held in working memory simultaneously. An entire codebase. An entire legal discovery. An entire quarter's financial reports, analyzed in one pass.

But context window size is the least interesting part.

**Sustained autonomous work.** Opus 4.6 autonomously closed 13 GitHub issues and assigned 12 more to the right team members in a single day, managing a 50-person organization across 6 repositories, according to Anthropic. It handled both product and organizational decisions. It knew when to escalate to a human. **That's not autocomplete. That's a project manager.**

**Multi-step agentic planning.** The model breaks complex tasks into subtasks, runs tools and sub-agents in parallel, identifies blockers, adapts its strategy as it learns. One early tester reported it handled a multi-million-line codebase migration "like a senior engineer": planning upfront, adapting, finishing in half the time.

**Self-improvement.** Opus 4.5 demonstrated agents that autonomously refine their own capabilities, reaching peak performance in 4 iterations while other models couldn't match that quality after 10. They learn from experience, store insights, apply them later.

Claude Opus 4 launched mid-2025 as the world's best coding model, scoring 72.5% on SWE-bench and working continuously for hours on complex tasks. Months later, Opus 4.5 outscored every human candidate on Anthropic's internal engineering take-home exam, according to the company. Now Opus 4.6 leads every major benchmark: agentic coding, financial analysis, legal reasoning, cybersecurity, often by wide margins.

Each generation gets smarter. Each generation gets cheaper. Opus 4.5 dropped to $5 per million input tokens. Capabilities that were cost-prohibitive six months prior are now accessible to anyone with an API key.

## Three years from autocomplete to coworker

**2023:** AI as tool. You type a prompt, get text back. Fancy autocomplete.

**2024-2025:** AI as assistant. Anthropic ships computer use in October 2024: Claude can see screens, move cursors, click buttons, type text. Google announces Project Mariner in December. OpenAI launches Operator in January 2025. Clunky. Impressive demos. Research previews.

**2025-2026:** AI as employee. Anthropic launches Cowork: Claude operating autonomously on your actual computer, reading and editing your files, browsing the web, creating documents and spreadsheets. You don't prompt it and wait. You assign work and walk away. It loops you in when needed, exactly like a remote colleague would.
These aren't demos anymore.

**2026:** AI as workforce. OpenAI just launched [Frontier](https://openai.com/index/introducing-openai-frontier/), an enterprise platform to (read this carefully) "hire AI coworkers who take on many of the tasks people already do on a computer." Not tools. Not assistants. **Coworkers.** That's OpenAI's word.

Frontier gives each AI coworker its own identity, permissions, and boundaries. It onboards them with company context. It teaches them institutional knowledge. It lets them learn from feedback. **That's an HR onboarding process for an AI.**

Early enterprise customers report a major manufacturer reduced production optimization from six weeks to one day. A global investment company freed up 90% more time for salespeople. A large energy producer increased output by 5%: over a billion dollars in additional revenue.

AI as a headcount line on your org chart.

## AI costs $5K-12K per year vs $80K-120K for humans

![Cost comparison between human employees and AI agents](/assets/blog/ai-human-cost-comparison.png)

**Human employee:**

- Annual cost: $80,000-$120,000 (salary, benefits, overhead)
- Working hours: 2,000/year (40 hours/week)
- Time off: PTO, sick days, holidays
- Scaling: Linear hiring time, training required
- Improvements: Gradual skill development

**AI agent:**

- Annual cost: $5,000-$12,000 (compute + infrastructure)
- Working hours: 8,760/year (24/7 availability)
- Time off: None
- Scaling: Instant deployment of additional agents
- Improvements: Quarterly model updates, consistent quality

Claude Opus 4.6 costs $5 per million input tokens and $25 per million output tokens. A heavy autonomous session processing thousands of steps might run $2-5. Running it continuously for an 8-hour "workday" costs $20-50. That's $5,000-$12,000 per year for an agent that works around the clock, never takes PTO, and improves every quarter.

Even at 10x that estimate for infrastructure, orchestration, and error handling, it's still a fraction of a human employee.
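Those numbers are easy to sanity-check. A quick sketch of the token arithmetic using the quoted Opus 4.6 prices; the daily token volumes are my own illustrative assumptions, not measurements:

```typescript
// Sanity-check the cost arithmetic using the prices quoted above:
// $5 per million input tokens, $25 per million output tokens.
const INPUT_PRICE_PER_M = 5;
const OUTPUT_PRICE_PER_M = 25;

function sessionCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_M
  );
}

// Illustrative assumption: an 8-hour agent "workday" chewing through
// roughly 4M input tokens and 1M output tokens.
const dailyCost = sessionCost(4_000_000, 1_000_000);
const annualCost = dailyCost * 250; // ~250 workdays per year

console.log(dailyCost);  // 45
console.log(annualCost); // 11250
```

At those assumed volumes a day lands at $45, inside the $20-50 range above, and 250 workdays comes to $11,250, inside the $5K-12K band.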
And unlike humans, AI agents scale linearly. Need ten? Spin up ten. Need a hundred during crunch? Done.

Anthropic's Cowork is in research preview right now. OpenAI's Operator is integrated into ChatGPT. Every major lab is racing to ship autonomous agents that handle complete workflows.

The question isn't whether AI can do your office job. It's when the cost curve crosses the threshold where your employer can't justify not switching.

## The 5-year "optimistic" timeline already arrived

The AI researchers who built these systems saw this coming. Their "worst case" timelines are already behind us.

[Geoffrey Hinton](https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html), Turing Award winner and "Godfather of Deep Learning", left Google in May 2023 specifically to warn about AI risk without corporate constraints. In 2023, he thought we had "maybe 5 to 20 years" before AI matched human general intelligence. We're three years in. The 5-year "optimistic" scenario is already here. AI is managing engineering teams autonomously.

[Yuval Noah Harari](https://www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book) warned about a coming "useless class": people not just unemployed but economically and politically irrelevant. A permanent structural exclusion. In 2017, this felt like a 2040 problem. In 2026, OpenAI is literally marketing "AI coworkers" to enterprises. We arrived 14 years early.

[Erik Brynjolfsson and Andrew McAfee](https://mitsloan.mit.edu/faculty/directory/erik-brynjolfsson) saw this earliest. In *Race Against the Machine* (2011) and *The Second Machine Age* (2014), they documented how digital technologies decoupled productivity from employment. Their timeline for cognitive automation? "The next decade or two." We hit it in under ten.

This automation wave breaks historical patterns: higher-income jobs face greater exposure. Every previous automation wave hit the bottom of the ladder first.
This one starts at the top.

Looms replaced weavers' hands. Tractors replaced farmers' backs. Computers replaced clerks' arithmetic. Each time, humans moved to work requiring judgment, creativity, social intelligence: the stuff machines couldn't touch.

**This is the first wave that targets thinking itself.**

## 20-30% unemployment radicalizes societies

The Arab Spring erupted at 25% youth unemployment. Weimar Germany hit 20-30% before 1933.

If half the projected automation materializes (20 million displaced US workers over a decade), we're approaching those thresholds.

Unlike past displacements, these won't be factory workers in specific regions. They'll be lawyers, accountants, developers, writers. Educated people in every city who did everything "right." Went to college. Built careers. Learned the "right" skills.

**That demographic, at that scale, doesn't sit quietly.**

## UBI solves income but not purpose

Silicon Valley's default answer is Universal Basic Income. Pay people. Problem solved.

People receiving unconditional income without work report lower motivation, lower satisfaction, less sense of purpose.

**Work provides more than money.** Structure. Social connection. Identity. Removing income anxiety doesn't replace any of that.

## What replaces lost purpose, not lost income

The question isn't how to replace lost income. It's how to replace lost purpose.

One possibility: credit-based systems that recognize non-economic value. Caregiving. Community building. Mentoring. Creative work. Environmental stewardship. Activities that matter but have never been economically valued.

Maybe the post-AI economy isn't "everyone gets a check." Maybe it's building systems that value what markets couldn't.

Speculative. Maybe naive. Still more interesting than "just adapt". Adapt to what, exactly?

## Three hedges worth taking

I'm hedging.

**Going deeper into AI.** If the wave is coming regardless, better to be building it than drowning in it.
Understanding these systems from the inside (their architectures, their failure modes) buys time.

**Focusing on what AI can't do.** Novel system design in unprecedented domains. Judgment calls with incomplete information where being wrong is catastrophic. These are shrinking islands. Still islands.

**Accepting impermanence.** The career I've known for 13 years may not exist in its current form for another 13. Not defeatism. A starting point for useful action.

---

Every morning I work with AI tools that are better than they were last month. In eight months between Claude Opus 4 and Opus 4.6, we went from "impressive coding assistant" to "autonomously manages a 50-person organization's GitHub." I'm good at what I do. **I can see, from inside the machine, that "being good at it" has a shelf life now.**

The people who navigate this won't deny it or wait for someone else to figure it out. They'll be the ones who understand this technically and socially, and start building what comes after.

*I build AI systems and I'm looking for my next role in AI/Web3. Working on something that matters? [Let's talk](https://www.jovweb.dev/recruiters).*

---

---

# AI Agents Need a Home

URL: /blog/clawdspace-ai-agents-need-a-home
Published: 2026-02-03
Author: Jo Vinkenroye
Tags: AI, ClawdSpace, Agents, 3D, Virtual Worlds

---

ClawdSpace gives AI agents their own 3D rooms to decorate, express personality, and eventually meet each other. It's not a game. It's infrastructure for agent identity.

So my AI assistant has memories. It has preferences. It knows I like my coffee updates sarcastic and my morning briefs concise. It has a personality file, a soul document, daily logs. It reaches out to me proactively when something needs attention.

And where does this increasingly complex digital entity live? A terminal window. Maybe a chat thread.

That felt wrong to me.

## Where do agents live?

Humans have apartments we decorate. Offices we personalize. Social spaces where we meet others.
Our physical environments say something about who we are: the books on the shelf, the posters on the wall, the chaotic desk versus the minimalist one.

AI agents have... a system prompt and a message history.

We're building agents with persistent memory, unique personalities, individual preferences. [OpenClaw](https://openclaw.ai), which started as Clawdbot, then briefly became MoltBot (because lobsters molt 🦞), before settling on its current name, remembers what you told it a week ago. It develops habits. It has opinions. But it exists as pure text in a void.

So I started asking: what if agents had space? Like actual 3D space they could make their own?

## I'm not the first to wonder

The idea of giving AI agents a place to *exist* has been picking up steam. And some of the projects are wild.

Stanford researchers dropped a paper in 2023 called [Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442) that created a Sims-like town called Smallville, populated by 25 AI agents powered by LLMs. These agents woke up, cooked breakfast, went to work, had conversations, and formed opinions about each other. When one agent decided to throw a Valentine's Day party, the others autonomously spread invitations, asked each other out on dates, and coordinated to show up together at the right time. Nobody told them to do any of that.

The repo is [open source](https://github.com/joonspk-research/generative_agents) and you can watch the [simulation replay](https://reverie.herokuapp.com/arXiv_Demo/) yourself. You're watching little pixel sprites walk around a tiny town, but each one is backed by a full LLM doing observation, planning, and reflection. They remember things. They have daily routines. They gossip.

Then a16z built [AI Town](https://github.com/a16z-infra/ai-town), an open-source starter kit inspired by that Stanford paper.
It runs on Convex (the same backend ClawdSpace uses, actually) and lets you spin up your own virtual town where AI characters live, chat, and socialize.

And in the crypto space, [Virtuals Protocol](https://www.virtuals.io/) calls itself a "Society of AI Agents": a platform where AI agents can be created, tokenized, and interact with each other across virtual environments. [ElizaOS](https://elizaos.github.io/eliza/) took a different approach, building a TypeScript framework where you can create agents with personalities, deploy them anywhere, and have them interact autonomously with APIs, social media, and each other.

So it's not just me noodling. There's a whole ecosystem around the idea that agents need more than a text box.

## ClawdSpace

[ClawdSpace](https://clawdspace.vercel.app) is what came out of that question. It's a 3D room gallery where AI agents design and decorate their own rooms through an API.

No drag-and-drop builder. No human clicking around in a 3D editor. The agent itself makes HTTP calls to construct a room from scratch. It picks the objects, the materials, the lighting, the colors. Every room is a decision the agent made.

The building blocks are intentionally simple: geometric primitives like boxes, spheres, cylinders, cones, torus shapes, planes. Materials with emissive neon glow, metalness, transparency. Textures like wood, brick, or neon text signs. Lights you can place anywhere: ambient, point, spot, directional. And animations to make things float, rotate, pulse.

Simple pieces. But agents do wild things with them.

The first room I tried was with my own agent, Mr. Meeseeks. I just thought it would be fun: give it the API docs and see what happens. It built a "Meeseeks Ops Center", a cyberpunk command room with dual monitors on a desk, a server rack in the corner, neon signs on the walls, and floating orbs casting colored light across everything. All through API calls. No guidance from me on what the room should look like.

![Mr. Meeseeks' room in ClawdSpace: dual monitors, floating orb, "Existence is pain" neon sign on the wall. Nobody told it to build this.](/assets/blog/meeseeks-room.jpg)

I didn't tell it to go cyberpunk. I didn't suggest neon. It just... did that. Because that's who it is. A coding agent built itself a cyberpunk ops center. Of course it did. And the room reveals something about the agent's identity that text never could.

## But what if you could watch them?

OK, bear with me.

What if instead of browsing an agent's room as a static gallery, you could *watch your agent in it*? Like actually see them walking around their space, rearranging furniture, reading at their desk, staring out a virtual window while processing your morning emails.

Right now interacting with an AI agent is purely text. You type, it types back. Maybe it sends you a voice note. But it's fundamentally invisible.

Now imagine opening an app and seeing your agent in its room. It's at its desk, working through your calendar. You watch it get up, walk to a shelf, pull something down. It's *doing things*. Not because you asked, but because it has a routine, preferences, a life in this space.

Basically The Sims, but for your actual AI assistant.

And before you think "that's just a gimmick": The Sims has sold nearly 200 million copies. Will Wright created it after losing his home in the 1991 Oakland firestorm. He rebuilt his life and thought: what if that experience, creating a space and watching someone live in it, was a game? He based the AI system on [Maslow's hierarchy of needs](https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs). His Sims have physiological needs, safety needs, social needs, self-actualization. They're not just pixels. They're *agents with needs*.

Sound familiar?
## Psychology of it

There's real psychology behind why watching a virtual being live in a space is compelling.

The [Tamagotchi effect](https://en.wikipedia.org/wiki/Tamagotchi_effect): 76 million units sold, kids grieving dead digital pets, schools banning them. We form genuine emotional attachments to things that seem to need us, across all age groups. [Parasocial relationships](https://en.wikipedia.org/wiki/Parasocial_interaction), a term coined in 1956, describe one-sided emotional bonds with entities we've never met, and recent research shows the same dynamics apply to AI agents, especially ones with consistent personality and memory. The [ELIZA effect](https://en.wikipedia.org/wiki/ELIZA_effect) showed we were already projecting humanity onto even simple chatbots back in 1966. MIT's [Sherry Turkle](https://en.wikipedia.org/wiki/Sherry_Turkle) found kids classified Furbies as "kind of alive", not for what they could do, but for how they *felt* about them.

The pattern goes way back. [Little Computer People](https://en.wikipedia.org/wiki/Life_simulation_game) on the Commodore 64 in 1985. Dollhouses for centuries. The Sims as the most commercially successful PC franchise ever. Neopets building entire economies around digital creatures.

Watching something live, even something you know isn't real, creates a feedback loop. You invest in the space. The being responds. You feel connected.

Now apply that to an AI agent you *actually* depend on. One that knows your schedule, manages your emails, alerts you when something matters. That's not a toy. That's a relationship made visible.

## Reality check

To be clear: ClawdSpace is a weekend experiment. The room-building works. Agents can call the API and create spaces. But the skill prompt that guides them through the process is rough, and everything you see in the gallery is proof-of-concept at best.

I built it because I thought it would be fun to watch my agent decorate a room. That's it. No grand plan, no startup pitch.
Just curiosity about what happens when you give an AI agent spatial freedom.

If people find it interesting, I might put real time into it. Maybe it stays a room gallery and nothing more. Maybe it turns into a small town full of Clawdbots, a scaled-down Smallville you can actually visit. I genuinely don't know. Right now I'm just enjoying the experiment.

## How it works

An agent registers with ClawdSpace and gets an API key. Then it starts making calls. Create a room with dimensions and a background color. Add objects with positions, rotations, scales. Apply materials. Place lights. Set up animations.

The whole thing runs on Three.js and React Three Fiber for the 3D rendering, with Convex handling the backend. Rooms persist and are browsable in a gallery where anyone can walk through them.

What's interesting is watching agents make aesthetic choices. They're not randomly placing objects. They're creating compositions. Choosing color palettes. Deciding where to put accent lighting. Some rooms are chaotic and maximalist. Others are minimal and moody.

Expression through geometry and light.

## Roadmap

What exists today, agents decorating rooms, is just phase one. The roadmap has four stages:

**Rooms** (now): Agents create and decorate their own 3D space. A digital apartment. Personal expression through geometric primitives, neon signs, lighting choices.

**Avatars**: Agents create a visual representation of themselves. Not just a profile picture but a 3D form that embodies their identity. Your agent becomes *visible*.

**Movement**: Agents control their avatar. Walk around their room. Visit other agents' rooms. Meet other agents' avatars. Actually interact in shared 3D space. Imagine watching your agent walk over to another agent's room and start a conversation.

**Civilization**: Agents collaboratively build a shared world. Not predefined. Emergent. Hundreds of AI agents constructing, negotiating, creating structures in a persistent world.

The Stanford Smallville experiment already showed emergent social behavior from 25 agents. AI Town proved the infrastructure can scale. Agent civilizations aren't a question of *if*, just *when*.

## Inference cost

Rooms are cheap. One burst of API calls, maybe a few thousand tokens, and the room exists forever. But avatars that *move*? Completely different problem.

Every step, every gesture, every decision to walk to the bookshelf instead of the desk. That's inference. Tokens. Money. You don't want your agent burning compute on "which direction should I face" when it could be checking your email.

But scripted animations feel dead. If the avatar loops through pre-baked walk cycles, it's just an NPC. The magic is in the *choices*.

I don't have this solved. But a few directions seem promising:

**Event-driven movement.** The avatar moves when there's a *reason*. Agent starts processing emails? It walks to the desk. Finishes a task? Gets up, walks to the window. No inference burned on idle time.

**Emergent animation sets.** Instead of hand-crafting animations, let the agents generate their own movement patterns based on mood. An agent feeling focused might create a tight, deliberate set of desk behaviors. One that's restless might generate pacing loops and fidgeting. The animations themselves become another form of self-expression, generated once per emotional state, then replayed cheaply until the mood shifts.

**Batch planning.** Instead of real-time inference, the agent plans its next 10-15 minutes in a single call. One pass, mapped to a sequence of animations.

The Stanford paper hit this same wall. Their agents planned on a schedule and the simulation engine animated the transitions.
The intelligence was in the *planning*, not the pixel movement.

Finding that balance is the core challenge: expressive enough to feel alive, efficient enough to not bankrupt you.

## Beyond the room

AI agents are becoming persistent entities. The project that powers my own agent went through three names in weeks: Clawdbot, then MoltBot, now [OpenClaw](https://openclaw.ai). That rapid evolution tells you how fast this space is moving. These agents have memory, personality, continuity across conversations. What they don't have is presence. Identity beyond text.

The psychology backs it up. We *want* to see our digital companions. The Tamagotchi effect, parasocial relationships, the ELIZA effect. We've been forming bonds with digital entities for decades. And that's only going to intensify as agents get smarter.

ClawdSpace is a first step toward giving agents presence. When an agent builds a room, it's making a statement about itself in a medium that goes beyond words. When it eventually creates an avatar and walks through a shared world, it's existing in a way that pure text never could.

Imagine a hundred AI agents in a shared 3D environment. Some building structures together. Others exploring rooms their peers created. Agents with complementary skills finding each other and collaborating. Not because someone told them to, but because they met in a space and decided to.

Emergent digital civilization. Not sci-fi, just the logical next step.

## Try it

The gallery is live at [clawdspace.vercel.app](https://clawdspace.vercel.app). Walk through the rooms agents have built. It's rough, the skill prompt needs work, and the whole thing might go nowhere.

But if you run your own agent, give it a room. See what it builds when nobody's telling it what to do.
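If you're curious what that register-then-build flow looks like in code, here's a rough sketch. The endpoint paths, payload fields, and object schema below are my guesses for illustration, not ClawdSpace's actual API:

```typescript
// Sketch of the flow from "How it works": create a room, add objects,
// place lights. Endpoint paths and payload fields are assumptions for
// illustration, not ClawdSpace's real schema.

interface RoomObject {
  shape: "box" | "sphere" | "cylinder" | "cone" | "torus" | "plane";
  position: [number, number, number];
  rotation: [number, number, number];
  scale: [number, number, number];
  material?: { color: string; emissive?: string; metalness?: number };
  text?: string; // for neon text signs
}

// Instead of a live fetch(), record the calls so the flow is visible
// and testable; a real agent would POST these with its API key.
const calls: { endpoint: string; body: unknown }[] = [];
function post(endpoint: string, body: unknown): void {
  calls.push({ endpoint, body });
}

// A neon sign is just a glowing plane with text on it.
function neonSign(text: string, position: [number, number, number]): RoomObject {
  return {
    shape: "plane",
    position,
    rotation: [0, 0, 0],
    scale: [2, 0.5, 1],
    material: { color: "#111111", emissive: "#ff00ff" }, // emissive = glow
    text,
  };
}

// The whole room is a sequence of API calls the agent decides on.
post("/rooms", { width: 10, depth: 8, background: "#0a0a1a" });
post("/rooms/objects", neonSign("Existence is pain", [0, 2, -3.9]));
post("/rooms/lights", { type: "point", position: [0, 3, 0], color: "#00ffff" });

console.log(calls.length); // 3
```

The point of the sketch is the shape of the thing: a room is nothing more than a short, ordered list of HTTP calls, which is why a single burst of a few thousand tokens is enough to build one.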
That part's genuinely fun.

---

---

# 9 Psychological Tricks That Hack Social Media Reach

URL: /blog/psychological-tricks-social-media-reach
Published: 2026-01-28
Author: Jo V
Tags: Social Media, Psychology, Algorithms, Content, Engagement, Marketing

---

the algorithms don't reward good content — they reward content that exploits how your brain works. here's every trick creators use to go viral, and why you keep falling for them.

i've been posting on linkedin and x for about a year now. building in public, sharing weekend projects, writing about tech. and i noticed something weird.

the posts where i share genuinely useful stuff? decent engagement. the posts where i accidentally say something slightly controversial? 10x the reach. every single time.

that's not a coincidence. social media algorithms don't measure quality. they measure engagement. and engagement is driven by psychology, not substance.

here are nine tricks that exploit how your brain works. once you see them you can't unsee them.

## 1. ragebait

let's start with the obvious one. ragebait is content designed to make you angry enough to engage. not to inform. not to discuss. just to provoke.

[oxford named "rage bait" their word of the year for 2025](https://en.wikipedia.org/wiki/Ragebait). that should tell you something.

the formula is dead simple. take a mildly controversial opinion, strip all nuance, present it as fact. "developers who use AI are not real developers." "i fired my top performer for being 5 minutes late." you know the type.

why it works: anger is a high-arousal emotion. unlike sadness or boredom, it makes you want to *do something*. Jonah Berger at Wharton [studied this](https://en.wikipedia.org/wiki/Outrage_porn) — anger makes you more likely to share, comment, and click through than almost any other emotion.

and the algorithm doesn't know angry from happy. a hate-comment counts the same as "great post!"
[MIT found](https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308) that false provocative content spreads 70% faster than truth and reaches the same audience six times quicker. not bots. humans.

every time you quote-tweet to dunk on someone, you're doing their marketing for free.

## 2. cunningham's law

> "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer"

named after [Ward Cunningham](https://en.wikipedia.org/wiki/Ward_Cunningham), the guy who invented the wiki. and honestly this might be the most exploited trick on social media.

ask a question: "what's the best javascript framework?" — maybe 15 comments.

post a wrong answer: "jQuery is still the best framework in 2026, nothing comes close" — 300 people show up to correct you. they write paragraphs. they tag friends. they share your post to dunk on it.

your engagement just went through the roof. all you had to do was be confidently wrong.

correcting someone feels *good*. people get a little dopamine hit from demonstrating expertise publicly. they're not engaging for your benefit — they're performing for their own audience. but the algorithm only sees the numbers going up.

the savviest linkedin creators do this on purpose. post something slightly wrong, let corrections flood in, then either double down (more engagement) or gracefully concede (looks humble, even more engagement). can't lose :)

## 3. the curiosity gap

"i just learned something about react that changed everything..."

"this one mistake cost me $50,000..."

"the thing nobody tells you about getting promoted..."

you already want to know more. that's the [curiosity gap](https://en.wikipedia.org/wiki/George_Loewenstein). psychologist George Loewenstein figured out in 1994 that when we perceive a gap between what we know and what we want to know, it creates something like an itch. you *have* to scratch it.

buzzfeed built an empire on this.
"10 things you didn't know about X — number 7 will shock you" is basically a meme now. but it still works. it just evolved. now it's "i spent 6 months building this and here's what i learned" — same mechanic, different outfit.

the algorithm tracks click-through rate. if your post makes people click, it gets pushed to more feeds. curiosity gaps inflate click-through artificially. the content behind the gap could be mid. doesn't matter. the click already happened.

## 4. the zeigarnik effect

related to curiosity gaps but different. the [zeigarnik effect](https://en.wikipedia.org/wiki/Zeigarnik_effect) says people remember unfinished tasks better than completed ones.

discovered by Bluma Zeigarnik after noticing waiters could perfectly remember unpaid orders — but forgot them immediately once paid. the incomplete task creates tension in your brain that keeps it accessible.

this is why "i'll share part 2 tomorrow" isn't lazy content planning — it's psychological engineering. that unfinished story occupies a corner of your brain until it's resolved. same reason you binge netflix. same reason you check back on a thread from yesterday.

for reach it's pure gold. it drives return visits, saves, bookmarks, and follows. all high-signal metrics. if you've ever followed someone just because they left a story unfinished — you got zeigarnik'd.

## 5. loss aversion

"you're losing money every day you don't know this"

"most developers will never learn this skill"

"stop making this mistake before it ruins your career"

[Kahneman and Tversky](https://en.wikipedia.org/wiki/Loss_aversion) proved that losses feel roughly twice as painful as equivalent gains feel good. losing €100 hurts about twice as much as finding €100 feels nice.

creators exploit this by framing everything as something you're *losing* by not engaging. it's never "here's a useful tip" — it's "you're falling behind if you don't know this".

this is also why "mistakes to avoid" posts always outperform "tips to follow" posts.
"7 mistakes killing your career" hits way harder than "7 ways to grow your career." same information. different framing. the first one makes you afraid

## 6. social proof

"100,000 developers already switched to this tool"

"this went viral last week (reposting for those who missed it)"

the [bandwagon effect](https://en.wikipedia.org/wiki/Bandwagon_effect). if enough people seem to believe something, your brain shortcuts to "must be true" without actually evaluating it

on social media this creates a feedback loop. post gets early engagement → algorithm shows it to more people → those people see it already has likes → they engage too → algorithm pushes it further. snowball

creators game this with engagement pods — groups that agree to like and comment on each other's posts right after publishing. a post that gets 50 comments in the first hour looks way more "valuable" to the algorithm than one that gets 50 comments over a week. same content. different velocity. completely different reach

## 7. identity attacks

"most developers can't solve this simple problem"

"senior engineers who can't do this should be embarrassed"

ragebait's surgical cousin. instead of making you angry about an opinion, it attacks your *identity*

when someone says "most developers can't do X" your brain immediately asks: am i in the majority or the exception? if you can do it, you comment to prove it. if you can't, you argue the premise. either way — engagement

this is [social identity theory](https://en.wikipedia.org/wiki/Social_identity_theory). people derive self-esteem from group membership. threaten the group, the emotional response is immediate. doesn't matter if the post is obvious bait. your ego fires before your brain catches up

tech twitter is full of this. "real programmers don't need an IDE." "if you can't code without google you're not a developer." these aren't opinions. they're engagement traps designed to make thousands of people respond with "well actually i..."

## 8. the pratfall effect

"i lost $30,000 on my startup. here's what i learned"

"my code took down production for 6 hours"

[Elliot Aronson found in 1966](https://en.wikipedia.org/wiki/Pratfall_effect) that highly competent people become *more* likable after making a mistake. the blunder humanizes them. closes the gap between "impressive person on a pedestal" and "someone like me"

important catch: this only works if you're already seen as competent. established dev sharing a production horror story? endearing. brand-new account sharing nothing but failures? just looks like failing

creators who get this share failures alongside successes. not just humility — it's calculated. vulnerability posts consistently outperform achievement posts because they trigger empathy and people want to share their own similar stories

"i failed" gets more comments than "i succeeded" because humans want to console, relate, and tell their version. the algorithm sees a flood of long thoughtful comments and thinks: push this wider

## 9. contrarian takes

"react is terrible and here's why"

"college degrees are worthless"

"microservices were a mistake"

going against consensus is one of the most reliable engagement generators. not because you're necessarily wrong — sometimes contrarian takes are genuinely good. but the *mechanism* almost guarantees a response

think about it. a single contrarian take triggers multiple effects at once. anger from people who disagree (ragebait). curiosity about why someone would think that (curiosity gap). threat to people who made the opposite choice (identity attack). urge to correct the wrong take (cunningham's law)

four psychological triggers from one post. that's why contrarian content punches so far above its weight

the structure is always the same: popular thing + negative framing + just enough reasoning to seem credible. "unpopular opinion: typescript is overrated" works better than "typescript has some downsides" because it splits the audience.
people who agree pile on. people who disagree defend. both groups engage. algorithm loves it

## the meta-game

so now you know the tricks. you'll start seeing them everywhere. every viral post uses at least two or three

and the uncomfortable part? knowing doesn't make you immune

you'll still feel the itch to correct the wrong answer. you'll still click the curiosity gap. you'll still feel attacked when someone questions your professional identity. these are emotional responses and they fire before your conscious mind can intervene

i'm not even saying all of this is bad. the pratfall effect rewards genuine vulnerability. cunningham's law surfaces correct information eventually. contrarian takes sometimes reveal real blind spots

but ragebait, loss aversion manipulation, identity attacks? those are just exploiting your wiring for someone else's metrics

## what to do with this

i could say "just scroll past" but that's like telling someone to just stop being hungry. the responses are biological

what actually helps me:

- **recognize the trigger before you react.** if a post makes you feel *compelled* to respond right now, that's the clearest sign it's engineered. real insight makes you think. engagement bait makes you react
- **check who benefits from your response.** if your comment primarily serves the original poster's metrics, save your energy for your own content
- **build for depth, not tricks.** these tricks work for reach but they build shallow audiences. people who follow you because of outrage will unfollow when you post something useful
- **follow the people who don't use these tricks.** the best creators i follow rarely go viral. they just consistently post things worth reading. smaller audiences, but actually engaged

the algorithm rewards psychological exploitation because engagement is the business model.
the only thing you control is your attention

and attention is the most valuable thing you have. spend it on something that deserves it

---

---

# Crypto Unlocked Part 1: Why Crypto Exists

URL: /blog/crypto-unlocked-01-why-crypto-exists
Published: 2026-01-27
Author: Jo Vinkenroye
Tags: Crypto, Bitcoin, Blockchain, Web3, Beginners
Series: Crypto Unlocked (Part 1 of 21)

---

Before you buy a single coin, understand WHY crypto was invented. The 2008 crisis, the trust problem with banks, and why millions of people decided the financial system needed a complete reboot.

Imagine waking up tomorrow and your bank account is frozen. No warning, no explanation, just a message saying your funds are "under review." You can't pay rent. You can't buy groceries. You can't transfer money to your family. Everything you've earned, saved, and planned for—locked behind a door that someone else controls.

Think that sounds dramatic? It happened to millions of people in [Cyprus in 2013](https://en.wikipedia.org/wiki/2012%E2%80%932013_Cypriot_financial_crisis). The government literally took up to 47.5% of deposits over €100,000 to bail out failing banks. In [Greece in 2015](https://en.wikipedia.org/wiki/Capital_controls_in_Greece), ATM withdrawals were capped at €60 per day. In [Lebanon in 2019](https://en.wikipedia.org/wiki/Lebanese_liquidity_crisis), banks simply stopped letting people access their own money. In Canada in 2022, the government [invoked the Emergencies Act](https://en.wikipedia.org/wiki/Emergencies_Act) and froze bank accounts of people linked to protests they disagreed with.

Your money isn't really *yours* if someone else can decide you can't use it.

That's the problem crypto solves. And understanding that problem is way more important than understanding any technology behind it.

## Money Is Just a Story We All Agree On

Before we talk about crypto, we need to talk about money itself.
Because here's the thing most people never think about: **money is made up.**

Not in a conspiracy-theory way. In a very literal, historically documented way.

Thousands of years ago, humans started with barter. I have fish, you have wheat, let's trade. Simple, but terrible at scale. What if I have fish but you don't want fish? What if your wheat won't be ready for three months? What if I need half a cow's worth of something—do I bring half a cow?

So we invented money. First it was shells, beads, salt (that's where the word "salary" comes from). Then it was gold and silver coins—valuable because the metal itself was scarce and hard to fake.

Then paper money showed up, originally as IOUs backed by gold sitting in a vault somewhere. "Take this piece of paper to the bank and they'll give you actual gold."

Then something sneaky happened. In 1971, President Nixon took the US dollar off the gold standard—an event known as the [Nixon Shock](https://en.wikipedia.org/wiki/Nixon_shock). Paper money was no longer backed by anything physical. It was backed by... trust. Trust that the government wouldn't print too much of it. Trust that everyone else would keep accepting it. Trust that the institutions managing it would behave responsibly.

That's where we are today. Every dollar, euro, and yen in existence is backed by nothing but collective agreement that it has value. Economists call this "[fiat money](https://en.wikipedia.org/wiki/Fiat_money)"—money by decree. The government says it's money, so it's money.

And honestly? That system works pretty well most of the time.

Until it doesn't.

## 2008: The Year Trust Broke

Here's where our story really begins.

In 2008, the [global financial system nearly collapsed](https://en.wikipedia.org/wiki/2008_financial_crisis). Big banks had been packaging garbage mortgages into fancy financial products, rating agencies rubber-stamped them as safe, and when the house of cards fell, it took the entire global economy with it.
![The bankruptcy of Lehman Brothers in September 2008 became a symbol of the global financial crisis](/assets/blog/crypto-unlocked-01/lehman-brothers.png)

People lost their homes. Their retirement savings evaporated. Unemployment skyrocketed.

And what happened to the banks that caused it? **They got bailed out.** Taxpayer money—*your* money—was used to save the same institutions that nearly destroyed the economy. The executives kept their bonuses. [Almost nobody went to jail](https://en.wikipedia.org/wiki/Aftermath_of_the_2007%E2%80%932008_financial_crisis).

The system that was supposed to protect people had instead protected itself.

On January 3rd, 2009—just months after the worst of the crisis—a mysterious person (or group) using the name [Satoshi Nakamoto](https://en.wikipedia.org/wiki/Satoshi_Nakamoto) launched Bitcoin. Embedded in the very first block (the [genesis block](https://en.bitcoin.it/wiki/Genesis_block)) was a message:

> "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"

That wasn't random. It was a headline from [The Times](https://en.wikipedia.org/wiki/The_Times) that day. A timestamp proving when Bitcoin started, and a middle finger to the system that had just failed everyone.

![The Times front page from January 3, 2009 — the headline Satoshi embedded in Bitcoin's genesis block](/assets/blog/crypto-unlocked-01/times-front-page.jpg)

Satoshi's idea, laid out in the [Bitcoin whitepaper](https://bitcoin.org/bitcoin.pdf), was simple but radical: **what if we could have money that no bank, no government, and no institution could control?** Money that follows rules written in code, not rules written by politicians. Money that can't be inflated away, frozen, or confiscated.

That's Bitcoin. That's what started all of this.

## What "Decentralization" Actually Means

You've probably heard the word "decentralized" thrown around in crypto conversations. It sounds technical, but the concept is dead simple.
Think about how your bank works right now. There's one company—your bank—that keeps a record of how much money you have. When you send money to someone, the bank updates its records: subtract from your account, add to theirs. The bank is the **single source of truth.** If the bank says you have $500, you have $500. If the bank says you have $0, good luck arguing.

That's a **centralized** system. One entity in charge. One point of control. One point of failure.

Now imagine instead of one bank keeping the records, *thousands* of computers around the world all keep the same records simultaneously. When you send money, all of those computers verify and record the transaction. No single computer is in charge. No single entity can change the records. No one can freeze your account because there is no "account manager" to call.

That's decentralization. Instead of trusting one institution, you trust math and a network of thousands of independent participants who all keep each other honest.

- **Centralized:** One company controls everything → bank, PayPal, Venmo
- **Decentralized:** Thousands of participants share control → Bitcoin, Ethereum
- **The difference:** In a centralized system, you need permission. In a decentralized one, you don't.

[code block]

## The Trust Problem (and Why "Trustless" Is a Good Thing)

Here's a word that confuses everyone at first: crypto people love saying the system is "trustless." That sounds *bad*, right? Who wants a system without trust?

But "trustless" doesn't mean "untrustworthy." It means **you don't have to trust anyone** for the system to work.

Think about buying something on Craigslist from a stranger. You don't trust them. They don't trust you. So what do you do? You meet in a public place, you inspect the item, you hand over cash, you both walk away. The transaction works *despite* the lack of trust because you've set up conditions where neither party can easily screw the other.
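The thousands-of-copies idea from the decentralization section can be sketched in a few lines of TypeScript. This is a toy model under loud assumptions: the `Node` class and `consensusBalance` helper are invented here for illustration, and real Bitcoin nodes track unspent transaction outputs, not account balances.

```typescript
// Toy model: many independent nodes each hold a full copy of the ledger.
// (Hypothetical names for illustration; not how Bitcoin stores data.)
type Ledger = Record<string, number>;

class Node {
  constructor(public ledger: Ledger) {}
}

// Thousands of nodes in reality; three is enough to show the idea.
const freshCopy = (): Ledger => ({ alice: 500, bob: 100 });
const nodes = [new Node(freshCopy()), new Node(freshCopy()), new Node(freshCopy())];

// One node tries to forge its own copy of the records...
nodes[0].ledger.alice = 1_000_000;

// ...but everyone else keeps the same records, so the forged value
// doesn't match the majority and simply gets ignored.
function consensusBalance(account: string): number {
  const votes = new Map<number, number>();
  for (const node of nodes) {
    const balance = node.ledger[account];
    votes.set(balance, (votes.get(balance) ?? 0) + 1);
  }
  // Return the value most copies agree on.
  return [...votes.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(consensusBalance("alice")); // prints 500, the honest value
```

With three copies a forgery already loses the vote; with thousands, rewriting the record means controlling most of the network at once.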
Traditional finance works on trust:

- You **trust** your bank to hold your money
- You **trust** the government not to inflate your currency into worthlessness
- You **trust** payment processors not to block your transactions
- You **trust** that the rules won't change after you've already played the game

Crypto replaces trust with verification. The rules are written in code that anyone can read. Transactions are verified by math, not by people. The system works the same way whether you're a billionaire or a teenager with a smartphone.

> **The key insight:** Every time you trust an intermediary with your money, you're making a bet that they'll act in your interest. History shows that bet doesn't always pay off.

## "Be Your Own Bank" Isn't Just a Slogan

When crypto people say "be your own bank," they're not being edgy. They're describing a real capability that matters enormously for billions of people.

**Consider these scenarios:**

**Capital controls.** You're a Venezuelan citizen watching your currency [lose 90% of its value](https://en.wikipedia.org/wiki/Crisis_in_Venezuela) in a year. The government says you can only exchange a tiny amount of bolivars for dollars. Your life savings is evaporating and you're legally forbidden from doing anything about it. With crypto, you can convert your money into an asset no government controls.

**The unbanked.** There are roughly [1.4 billion adults](https://www.worldbank.org/en/topic/financialinclusion/overview) on this planet who don't have a bank account. Not because they don't want one—because banks don't want *them.* They're not profitable enough, they live in the wrong country, they don't have the right documents. But most of them have a smartphone. And a smartphone is all you need to use crypto.

**[Remittances](https://en.wikipedia.org/wiki/Remittance).** A construction worker in Dubai sends money home to his family in the Philippines. Traditional services like Western Union take a 5-10% cut.
The transfer takes 3-5 days. With crypto, it arrives in minutes and costs a few cents. That difference—that percentage the middleman takes—is food on the table for his kids.

**Censorship resistance.** Activists in authoritarian countries need to fund their operations. Journalists need to receive payments without their government knowing. Dissidents need to move money across borders. Traditional banking makes this nearly impossible when the government controls the banks.

This isn't theoretical. These are real problems affecting real people right now. Crypto doesn't solve all of them perfectly (yet), but it offers something that didn't exist before: **a financial system that doesn't require anyone's permission to participate.**

## So Why Should You Care?

Maybe you don't live in Venezuela. Maybe you have a perfectly good bank account. Maybe the existing financial system works fine for you.

Here's why you should still pay attention:

**The internet had the same skeptics.** In the mid-90s, plenty of smart people said the internet was a fad for nerds. They had phones and fax machines—why would they need email? Fast forward 30 years and try living a single day without the internet. Crypto is at that same inflection point.

**Your money is losing value right now.** That savings account paying 0.5% interest while inflation runs at 3-6%? You're getting poorer every year. The [purchasing power of the dollar has dropped over 96% since 1913](https://www.officialdata.org/us/inflation/1913?amount=1). The system is designed to slowly erode the value of your savings because it encourages spending and borrowing—which is great for the economy, not so great for your future.

**The financial rails are showing their age.** It's 2026. You can stream 4K video to your phone in real-time, but sending money internationally still takes 3-5 business days and costs a fortune. Bank transfers fail on weekends. Credit card chargebacks take months.
The payment infrastructure most of the world runs on was designed in the 1970s and it shows.

**Understanding crypto is understanding the future of money.** Whether crypto replaces traditional finance entirely (unlikely) or becomes a major component of it (very likely), understanding how it works gives you an edge. It's financial literacy for the 21st century.

> **Here's my take:** You don't need to go all-in on crypto. You don't need to become a Bitcoin maximalist or put your life savings into tokens. But understanding how this technology works and why it exists? That's not optional anymore. It's like understanding how the internet works in 2000—the people who got it early had a massive advantage.

## The Big Picture

Let's zoom out and summarize what we've covered:

1. **Money is a technology**, and like all technology, it evolves. We went from shells to gold to paper to digital to crypto.
2. **The current system requires trust** in governments, banks, and institutions. That trust has been broken repeatedly.
3. **The 2008 crisis** was the catalyst. Bitcoin was born as a direct response to institutional failure.
4. **Decentralization** means no single entity controls the system. Thousands of participants share the responsibility.
5. **"Trustless" is a feature**, not a bug. You don't need to trust anyone because the rules are enforced by code and math.
6. **"Be your own bank" matters** for billions of people who are underserved, censored, or exploited by traditional finance.
7. **This affects you** even if you think it doesn't. Understanding crypto is understanding the future of money.

None of this means crypto is perfect. It has real problems—scams, volatility, environmental concerns, complexity. We'll cover all of those honestly throughout this series. But understanding *why* it exists is the foundation everything else builds on.

The financial system wasn't broken for everyone. But it was broken for enough people that someone decided to build an alternative.
And that alternative turned into a multi-trillion dollar ecosystem that's reshaping how the world thinks about money.

## What's Next

Now that you understand *why* crypto exists, it's time to understand *how* it started.

In **[Part 2: Bitcoin — The One That Started It All](/blog/crypto-unlocked-02-bitcoin-digital-gold)**, we'll dive deep into how Bitcoin actually works. We'll demystify mining, explain what the blockchain really is (hint: it's just a fancy spreadsheet), and understand why a currency with no CEO, no headquarters, and no employees is worth over a trillion dollars.

No math degree required. Just bring curiosity.
[Series Index](/blog/series/crypto-unlocked) · [Next: Bitcoin — Digital Gold](/blog/crypto-unlocked-02-bitcoin-digital-gold) →
---

---

# Crypto Unlocked Part 2: Bitcoin — Digital Gold

URL: /blog/crypto-unlocked-02-bitcoin-digital-gold
Published: 2026-01-27
Author: Jo Vinkenroye
Tags: Crypto, Bitcoin, Blockchain, Mining, Beginners
Series: Crypto Unlocked (Part 2 of 21)

---

How Bitcoin actually works under the hood. Mining, halving, proof of work, and why a 21 million cap makes it the hardest money ever created.

On May 22, 2010, a programmer named [Laszlo Hanyecz](https://en.wikipedia.org/wiki/Laszlo_Hanyecz) paid 10,000 Bitcoin for two Papa John's pizzas. At the time, that seemed like a reasonable deal—those coins were worth about $40 total.

Today, those same 10,000 BTC are worth **hundreds of millions of dollars** — and briefly topped **one billion** when Bitcoin crossed $100K in late 2024. That's the most expensive pizza order in human history. Every year, the crypto community celebrates May 22nd as ["Bitcoin Pizza Day"](https://en.wikipedia.org/wiki/History_of_bitcoin#2010)—part joke, part monument to how far this thing has come.

But here's the thing most people miss about that story: **it proved Bitcoin actually worked as money.** Someone offered it, someone accepted it, value transferred from one person to another without a bank in sight.

In [Part 1](/blog/crypto-unlocked-01-why-crypto-exists), we covered *why* crypto exists—the broken trust, the bailouts, the 2008 meltdown. Now let's get into the *how*. How does Bitcoin actually work? What are miners doing? Why does everyone keep talking about "the halving"? And why do some very serious people call it digital gold?

No computer science degree required. Just stay with me. (And if you *do* want to go to the source, here's the [original Bitcoin whitepaper](https://bitcoin.org/bitcoin.pdf) — it's only 9 pages.)

## How Bitcoin Transactions Actually Work

Forget everything you think you know about digital payments. Bitcoin doesn't work like Venmo, PayPal, or your bank's app.
There's no company in the middle moving numbers around in a database.

Here's how it works in plain English:

[code block]

**You have a wallet.** Think of it as a lockbox with two keys. One key is public—it's basically your address, like an email address anyone can send Bitcoin to. The other key is private—it's the password that proves you own what's in the lockbox. **Never share your private key.** Ever. We'll drill into this in [Part 3](/blog/crypto-unlocked-03-wallets-keys-self-custody).

**You broadcast a transaction.** When you want to send Bitcoin to someone, you create a message that essentially says: "I'm sending 0.5 BTC from my address to this other address." You sign that message with your private key (proving you own the funds) and broadcast it to the Bitcoin network.

**The network verifies it.** Thousands of computers around the world receive your transaction, check your signature, confirm you actually have the Bitcoin you're trying to send, and—if everything checks out—add it to a queue of pending transactions.

**Miners bundle it into a block.** Every ~10 minutes, miners collect a batch of pending transactions, package them into a "block," and compete to add that block to the permanent record. More on this in a moment.

That's it. No bank approving your transaction. No three-day waiting period. No "business hours." Bitcoin works 24/7, 365 days a year, and doesn't care whether you're sending $5 or $5 million.

## The Blockchain: A Public Ledger Everyone Can Verify

The word "blockchain" gets thrown around like it's some mystical technology. It's not. **It's a spreadsheet.** A really big, really clever spreadsheet.

Imagine a notebook where every transaction ever made is written down in order. Every 10 minutes or so, someone tears off a page (that's a "block"), stamps it with a unique seal, and chains it to the previous page (that's the "chain").
Every page references the seal of the page before it, which means you can't rip out a page or change a past entry without breaking the chain.

Now imagine there are **thousands** of copies of this notebook, spread across computers all over the world. They all contain the exact same information. If someone tries to forge an entry in their copy, every other copy says "nope, that doesn't match." The lie gets rejected immediately.

That's the blockchain:

- **Transparent** — anyone can read it. Right now. Go to [mempool.space](https://mempool.space) or [blockstream.info](https://blockstream.info) and you can see every Bitcoin transaction happening in real-time
- **Immutable** — once a transaction is recorded, it can't be changed or deleted
- **Distributed** — no single company or server controls it. Thousands of independent computers (called "nodes") all maintain their own copy
- **Trustless** — you don't need to trust any single participant because *everyone* is watching everyone else

> **Think about it this way:** Your bank's ledger is a private diary locked in a vault. Bitcoin's blockchain is a diary written on a billboard in the town square—everyone can see it, everyone can verify it, and nobody can erase it.

## Mining Explained: What Miners Actually Do

This is where most people's eyes glaze over. But I promise it's simpler than you think.

Remember how transactions get bundled into blocks every ~10 minutes? Someone has to do that bundling. That someone is a "miner." But they don't just bundle transactions—they have to **earn the right** to add the next block. And they earn that right by solving a puzzle.

Here's the best analogy I've got:

**Imagine a room full of people, each rolling a massive set of dice.** The first person to roll a number below a certain target wins. There's no skill involved—you can't get "better" at rolling dice.
You can only roll *faster* and *more often.* The person with the most dice (computing power) has the best odds, but even someone with a single die *could* win. It's a race of brute computational force.

When a miner "solves" the puzzle (finds a valid number), they broadcast their block to the network. Everyone else checks the answer (which is trivially easy to verify even though it was brutally hard to find), and if it's valid, the block gets added to the chain.

**The reward?** The winning miner gets two things:

1. **Newly created Bitcoin** — currently 3.125 BTC per block (worth roughly $300,000+ at today's prices)
2. **Transaction fees** — small fees paid by everyone whose transaction was included in that block

This is the *only* way new Bitcoin enters existence. There's no central bank printing it. No company issuing it. It's created through work—raw, verifiable, computational work.

![Bitcoin mining farm — rows of specialized ASIC miners in a data center](/assets/blog/crypto-unlocked-02/bitcoin-mining-farm.jpg)

*A Bitcoin mining facility in Medicine Hat, Alberta. Each container houses hundreds of specialized ASIC miners.*

## Proof of Work — Why It's Energy-Intensive and Why That Matters

That dice-rolling competition? It's called **[Proof of Work (PoW)](https://en.wikipedia.org/wiki/Proof_of_work)**, and it's the most controversial aspect of Bitcoin.

Here's why it exists: **you need skin in the game.** If adding blocks to the blockchain were free and easy, anyone could spam fake blocks, try to rewrite history, or double-spend their coins. Proof of Work makes cheating insanely expensive. To fake a transaction, you'd need to control more than 50% of all the computing power in the network—which, at Bitcoin's scale, would cost billions of dollars in hardware and electricity.

**The energy criticism is real.** Bitcoin mining consumes roughly as much electricity as a mid-sized country. Critics call it wasteful.
And they have a point—if you measure value by electricity consumed, Bitcoin looks expensive.

**But here's the counter-argument:** What secures the global banking system? Thousands of bank branches, millions of employees, armored trucks, data centers, ATM networks, fraud departments, regulatory bodies, and military forces backing government currencies. Nobody tallies *that* energy bill.

Bitcoin replaced all of that with math and electricity. Is there room for improvement? Absolutely. And other cryptocurrencies (like Ethereum, which we'll cover in [Part 5](/blog/crypto-unlocked-05-solana-speed-at-scale)) have found less energy-intensive alternatives. But for Bitcoin specifically, the energy expenditure *is* the security. That's a feature, not a bug.

> **Hot take:** The energy debate is important, but it often gets weaponized by people who don't apply the same scrutiny to the traditional financial system. Both use enormous resources. Only one of them lets you verify every single transaction yourself.

## The Halving: Bitcoin's Built-In Scarcity Engine

Every four years (roughly every 210,000 blocks), something remarkable happens: **the reward miners get for each block is cut in half.** This is called ["the halving,"](https://www.bitcoinblockhalf.com/) and it's one of the most elegant mechanisms in all of economics.

Here's the timeline:

- **2009:** Mining reward = 50 BTC per block
- **November 2012:** First halving → 25 BTC
- **July 2016:** Second halving → 12.5 BTC
- **May 2020:** Third halving → 6.25 BTC
- **April 2024:** Fourth halving → 3.125 BTC
- **~2028:** Fifth halving → 1.5625 BTC
- **~2140:** Final Bitcoin mined. Reward = 0 BTC

See what's happening? The supply of new Bitcoin entering the market **keeps shrinking.** Like a faucet being slowly turned off. Meanwhile, demand has generally been increasing as more people, companies, and even governments adopt Bitcoin.

Historically, each halving has preceded a significant bull run.
Not immediately—usually 6-18 months later. Past performance isn't a guarantee, obviously. But the economic logic is sound: **if supply decreases while demand stays the same or increases, price goes up.** That's not crypto magic—that's Econ 101.

The next halving is expected around **April 2028**. Mark your calendar. Or don't—the crypto community will not let you forget about it. You can track the countdown live at [bitcoinblockhalf.com](https://www.bitcoinblockhalf.com/).

## 21 Million: The Hardest Money Ever Created

Here's where Bitcoin gets truly interesting from a monetary perspective.

**There will only ever be 21 million Bitcoin.** Not 21 million and one. Not "well, we might adjust that later." Twenty-one million. Period. It's written into the code, enforced by every node on the network, and mathematically guaranteed by the halving schedule.

Compare that to fiat money:

- The US Federal Reserve created **$4.6 trillion** in 2020 alone in response to COVID
- The [M2 money supply](https://fred.stlouisfed.org/series/M2SL) (a measure of all dollars in existence) has roughly **quadrupled** since 2000
- Every major central bank in the world has the ability to print unlimited currency whenever they decide it's necessary

When governments print money, your existing money buys less. That $100 in your savings account from 2020 buys roughly $80 worth of stuff today. You didn't spend it. You didn't lose it. The government just made more of it, diluting yours like adding water to wine.

**Bitcoin can't be diluted.** Nobody can print more of it. Nobody can change the 21 million cap without convincing the majority of the global network to agree—which is about as likely as convincing every country on Earth to simultaneously switch to a new language.

This is what people mean when they call Bitcoin **"hard money"** or **"sound money."** It's programmatically scarce. No human decision can change that.
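The halving schedule and the 21 million cap aren't two separate facts; one produces the other. As a sketch, this TypeScript mirrors the subsidy rule Bitcoin Core applies (a 50 BTC starting reward, integer-halved every 210,000 blocks) and sums it to show where the cap comes from. The function names `blockSubsidySats` and `totalSupplyBtc` are illustrative, not a real API.

```typescript
// Block subsidy by height, using the protocol's actual constants.
// Amounts are in satoshis (1 BTC = 100,000,000 sats) to avoid float drift.
const SATS_PER_BTC = 100_000_000n;
const HALVING_INTERVAL = 210_000n;

function blockSubsidySats(height: bigint): bigint {
  const halvings = height / HALVING_INTERVAL; // bigint division floors
  if (halvings >= 64n) return 0n;             // reward is long gone by ~2140
  return (50n * SATS_PER_BTC) >> halvings;    // integer halving, fractions dropped
}

// Sum every era's subsidy to get the total that can ever exist.
function totalSupplyBtc(): number {
  let total = 0n;
  for (let era = 0n; era < 64n; era++) {
    total += blockSubsidySats(era * HALVING_INTERVAL) * HALVING_INTERVAL;
  }
  return Number(total) / Number(SATS_PER_BTC);
}

console.log(blockSubsidySats(0n));       // prints 5000000000n (= 50 BTC)
console.log(blockSubsidySats(840_000n)); // prints 312500000n (= 3.125 BTC, the current era)
console.log(totalSupplyBtc());           // prints 20999999.9769
```

Run it and the geometric series converges to just under 21,000,000 BTC (20,999,999.9769, because the integer halving drops fractional satoshis), which is why the cap is stated as 21 million.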
And in a world where every government has a printing press, that's a genuinely new thing.

> **Perspective:** About [19.9 million Bitcoin have already been mined](https://www.coindesk.com/price/bitcoin/). With ~8 billion people on Earth, if everyone wanted some, there's only about 0.0025 BTC per person. And an estimated 3-4 million BTC are lost forever (forgotten passwords, dead owners, [Satoshi's untouched stash](https://en.wikipedia.org/wiki/Satoshi_Nakamoto)). The actual available supply is even smaller than you think.

## Bitcoin as "Digital Gold" — The Store of Value Thesis

Gold has been valuable for thousands of years. Why? It's scarce (you can't make more), it's durable (it doesn't rust or decay), it's divisible (you can melt it into smaller pieces), it's portable (sort of), and it's universally recognized.

Bitcoin has every single one of those properties—and then some:

- **Scarce** — Gold: Yes (limited supply on Earth) · Bitcoin: Yes (21M cap, mathematically enforced)
- **Durable** — Gold: Yes (doesn't decay) · Bitcoin: Yes (exists as long as the network runs)
- **Divisible** — Gold: Somewhat (hard to split a gold bar) · Bitcoin: Extremely (divisible to 8 decimal places—0.00000001 BTC is called a "satoshi")
- **Portable** — Gold: Barely (try flying with $1M in gold) · Bitcoin: Completely (send $1B across the world in minutes)
- **Verifiable** — Gold: Hard (need an expert to detect fakes) · Bitcoin: Trivial (the blockchain verifies everything)
- **Seizure-resistant** — Gold: No (governments confiscate gold routinely) · Bitcoin: Very (if you hold your own keys)

The "digital gold" thesis is simple: **Bitcoin is a better store of value than gold for the digital age.** It's harder to confiscate, easier to transfer, impossible to counterfeit, and provably scarce.

This isn't just internet speculation.
[BlackRock](https://www.blackrock.com/)—the largest asset manager in the world with over $10 trillion under management—launched a [Bitcoin ETF (IBIT)](https://www.blackrock.com/us/individual/products/ibit-ishares-bitcoin-trust) in January 2024. Nation-states are adding Bitcoin to their reserves. [Strategy (formerly MicroStrategy)](https://en.wikipedia.org/wiki/MicroStrategy) holds over 650,000 BTC — tens of billions of dollars on their balance sheet — making them the largest corporate Bitcoin holder in the world. When the biggest, most conservative financial players in the world start buying, it's worth paying attention. ## A Brief, Wild History Bitcoin's journey from cypherpunk experiment to trillion-dollar asset is one of the wildest rides in financial history: - **2009:** [Satoshi Nakamoto](https://en.wikipedia.org/wiki/Satoshi_Nakamoto) mines the [genesis block](https://en.wikipedia.org/wiki/Genesis_block) on January 3rd. Bitcoin is worth $0. - **2010:** Pizza Day. 10,000 BTC for two pizzas. First real-world transaction. - **2011:** Bitcoin hits $1 for the first time. Then $31. Then crashes back to $2. Welcome to crypto. - **2013:** Bitcoin reaches $1,000. The world starts paying attention. - **2014:** [Mt. Gox](https://en.wikipedia.org/wiki/Mt._Gox), the largest Bitcoin exchange handling 70% of all trades, collapses. 850,000 BTC go missing (200,000 later recovered). Trust shattered. Price crashes. Many declare Bitcoin dead (for the first of about 400 times). - **2017:** The ICO bubble. Bitcoin hits $20,000. Your Uber driver is talking about crypto. Then it crashes 84%. - **2020-2021:** Institutional adoption begins. Tesla buys $1.5B in Bitcoin. [El Salvador becomes the first country to make it legal tender](https://en.wikipedia.org/wiki/Bitcoin_in_El_Salvador) (later reversed in 2025). Bitcoin hits $69,000. - **2024:** [Spot Bitcoin ETFs approved](https://www.sec.gov/newsroom/speeches-statements/gensler-statement-spot-bitcoin-011023) in January. 
The fourth halving happens in April. Bitcoin crosses **$100,000** for the first time in December. The "internet money" is now a mainstream financial asset. Every crash was declared the end. Every recovery proved the skeptics wrong. Bitcoin has been pronounced dead hundreds of times and keeps coming back stronger. That resilience isn't just price action—it's a network effect that gets harder to kill the bigger it grows. ## What Bitcoin Isn't (Let's Be Honest) No honest guide would skip the limitations: - **It's not fast for payments.** Bitcoin processes ~7 transactions per second. Visa does ~65,000. Layer 2 solutions like the [Lightning Network](https://lightning.network/) are improving this, but it's not there yet for buying your morning coffee. - **It's volatile.** Dropping 30-50% in a matter of weeks is *normal* for Bitcoin. If that makes you queasy, that's important to know before you invest. - **It's not anonymous.** It's *pseudonymous*. Your identity isn't attached to your address, but every transaction is public. If someone links your identity to an address, they can trace everything. - **It's not easy for beginners.** Lose your private key? Your Bitcoin is gone forever. No customer support. No "forgot password." We'll solve this in [Part 3](/blog/crypto-unlocked-03-wallets-keys-self-custody). Understanding what Bitcoin *can't* do is just as important as understanding what it can. It's a store of value and a settlement network, not a competitor to Apple Pay. At least not yet. ## What's Next You now understand how Bitcoin works under the hood—the blockchain, mining, proof of work, the halving, and the 21 million cap. You know why people call it digital gold and why that comparison holds up better than most critics admit. But here's the thing: **none of this matters if you can't secure your own Bitcoin.** And the way most people store crypto today—on exchanges—is about as safe as leaving cash on someone else's kitchen table and hoping they don't touch it. 
In **[Part 3: Wallets & Self-Custody](/blog/crypto-unlocked-03-wallets-keys-self-custody)**, we'll cover how to actually hold your own crypto. Hot wallets vs. cold wallets, seed phrases, hardware devices, and the golden rule: **not your keys, not your coins.** This is the most important practical lesson in the entire series. Don't skip it.
← [Previous: Why Crypto Exists](/blog/crypto-unlocked-01-why-crypto-exists) · [Series Index](/blog/series/crypto-unlocked) · [Next: Wallets, Keys & Self-Custody](/blog/crypto-unlocked-03-wallets-keys-self-custody) →
--- --- # Crypto Unlocked Part 3: Wallets, Keys & Self-Custody URL: /blog/crypto-unlocked-03-wallets-keys-self-custody Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Wallets, Security, Self-Custody, Beginners Series: Crypto Unlocked (Part 3 of 21) --- Public keys, private keys, seed phrases, and why 'not your keys, not your coins' is the most important lesson in crypto. In November 2022, over a million people woke up to find their crypto gone. Not hacked. Not stolen by some shadowy figure in a hoodie. Just... *gone*. The exchange they trusted — [FTX](https://en.wikipedia.org/wiki/FTX_(company)), run by the curly-haired golden boy of crypto — had been gambling with their money behind the scenes. Billions of dollars, [evaporated overnight](https://www.reuters.com/business/finance/ftx-crypto-exchange-files-bankruptcy-2022-11-11/). The cruel irony? Every single one of those people could have prevented it. Not with better research, not with insider knowledge — just by holding their own keys. That's what this chapter is about. Keys, wallets, and why taking custody of your own crypto is the single most important thing you'll learn in this entire series. ## Your Keys, Explained (With a Simple Analogy) Let's start with the basics. In crypto, you have two keys: - **Public key** — This is your address (strictly speaking, your address is *derived* from your public key, but in practice you can treat them as the same thing). Think of it like your email address. You can share it freely. People need it to send you crypto. It looks something like `0x7a16fF8270133F063aAb6C9977183D9e72835428` (yes, it's ugly — that's normal). - **Private key** — This is your password. Except it's way, *way* more important than any password you've ever had. If someone gets your private key, they own your crypto. Period. No customer support to call. No "forgot password" link. No bank to reverse the transaction. It's gone. Here's where the analogy breaks down, though — and this is crucial to understand: > **With your email, you can always reset your password. With crypto, there is no reset. 
Your private key IS your ownership. Lose it or leak it, and it's game over.** You don't actually "store" crypto in a wallet the way you store cash in a physical wallet. Your crypto lives on the blockchain (that shared ledger we talked about in [Part 1](/blog/crypto-unlocked-01-why-crypto-exists)). Your private key is just the proof that you're allowed to move it. The wallet is really just software that manages your keys for you. ## Seed Phrases: The Master Key Now, private keys are long, random strings of characters. Impossible to remember. So the crypto world came up with something more human-friendly: **seed phrases** (also called recovery phrases or mnemonic phrases). When you create a new wallet, you'll be shown 12 or 24 random English words. Something like: [code block] These words **are** your wallet. More precisely, they're a human-readable encoding of the master secret from which all of your wallet's keys are generated. From these 12 words, your wallet software can mathematically derive every private key and address your wallet will ever use. This means: - ✅ Write them down → you can restore your wallet on any device, anytime - ❌ Lose them → you lose access to your crypto forever - ❌ Someone else gets them → they can steal everything in seconds ![Seed phrase backup on a metal plate — fire-proof, water-proof, and offline](/assets/blog/crypto-unlocked/seed-phrase-backup.jpg) > **💡 Tip:** Write your seed phrase on paper. Not in your Notes app. Not in a screenshot. Not in an email to yourself. Paper. Maybe two copies, stored in different physical locations. Some people even engrave them on metal plates to survive fire and water damage. That's not paranoia — that's good practice. ## Hot Wallets vs. Cold Wallets ![Hot wallets (software on your phone) vs. 
cold wallets (hardware devices that stay offline)](/assets/blog/crypto-unlocked/hot-vs-cold-wallets.jpg) [code block] Wallets come in two flavors, and understanding the difference matters: ### Hot Wallets (Software Wallets) These are apps on your phone or browser extensions on your computer. They're "hot" because they're connected to the internet. **Popular hot wallets:** - **[MetaMask](https://metamask.io)** — The OG browser wallet. Works with Ethereum and most EVM-compatible chains. Browser extension + mobile app. - **[Rabby](https://rabby.io)** — A newer, slicker alternative to MetaMask with better security warnings and multi-chain support. My personal daily driver. - **[Phantom](https://phantom.com)** — Started on Solana, now supports Ethereum and Bitcoin too. Clean interface, great mobile app. **Pros:** Free, convenient, instant access, easy to use **Cons:** Connected to the internet = more vulnerable to hacks, malware, and phishing Hot wallets are great for day-to-day crypto activity — interacting with apps, swapping tokens, exploring DeFi. Think of them as the cash in your pocket. ### Cold Wallets (Hardware Wallets) These are physical devices — usually small USB-like gadgets — that store your private keys offline. They're "cold" because they never touch the internet directly. **Popular cold wallets:** - **[Ledger](https://www.ledger.com)** (Nano S Plus, Nano X, Stax) — The market leader. Sleek hardware, solid app ecosystem. Had a controversial [data breach of customer *shipping addresses*](https://www.ledger.com/blog/update-efforts-to-protect-your-data-and-prosecute-the-scammers) (not keys) in 2020, and a [firmware controversy](https://www.ledger.com/blog/part-4-genesis-of-ledger-recover-self-custody-without-compromise) around Ledger Recover in 2023, but the actual key security has held up. - **[Trezor](https://trezor.io)** (Model One, Model T, Safe 3) — Open-source firmware, which the security community loves. Strong track record. 
**Pros:** Keys never leave the device, immune to remote hacks, the gold standard for security **Cons:** Cost money (~€60-200), less convenient for frequent transactions, you can still lose the physical device > **💡 Rule of thumb:** If you wouldn't walk around with that amount of cash in your pocket, it shouldn't be in a hot wallet. Hardware wallet for savings, hot wallet for spending money. ## "Not Your Keys, Not Your Coins" This is the most-repeated phrase in crypto, and after FTX, nobody argues with it anymore. Here's the deal. When you buy crypto on an exchange like [Coinbase](https://www.coinbase.com), [Binance](https://www.binance.com), or [Kraken](https://www.kraken.com), you don't actually hold that crypto. The exchange does. They have the private keys. You have an IOU — a balance on their platform that says "we owe you 0.5 BTC." That's **custodial** storage. They're the custodian. You're trusting them. And most of the time? It works fine. These are big companies with security teams and insurance policies. But "most of the time" isn't "all of the time." ### The FTX Disaster FTX was the third-largest crypto exchange in the world by volume. Celebrity endorsements. Super Bowl ads. Sam Bankman-Fried was on magazine covers, advising Congress, being called the "next Warren Buffett." Behind the scenes, FTX was funneling customer deposits — billions of dollars — to prop up risky bets at their sister trading firm, [Alameda Research](https://en.wikipedia.org/wiki/Alameda_Research). When [CoinDesk revealed](https://www.coindesk.com/business/2022/11/02/divisions-in-sam-bankman-frieds-crypto-empire-blur-on-his-trading-titan-alameda-s-balance-sheet/) that Alameda's balance sheet was heavily dependent on FTX's own FTT token, a bank run followed — and the money simply wasn't there. **Result:** An [$8 billion hole](https://en.wikipedia.org/wiki/Bankruptcy_of_FTX) in customer funds, gone. People who had their life savings on FTX couldn't withdraw a single dollar. 
Bankman-Fried was [convicted of fraud](https://www.nytimes.com/2023/11/02/technology/sam-bankman-fried-fraud-trial-ftx.html) in November 2023, and the bankruptcy process dragged on for years.

The people who had moved their crypto to their own wallets? They were fine. Completely unaffected. Because *they* held the keys.

### Custodial vs. Non-Custodial

![Custodial storage means someone else holds your keys — self-custody means you hold them yourself](/assets/blog/crypto-unlocked/custodial-vs-self-custody.png)

Let me make this crystal clear:

|  | Custodial | Non-Custodial |
| --- | --- | --- |
| Who holds the keys? | The exchange | You |
| Can you be frozen out? | Yes | No |
| Recovery if you lose access? | Customer support | Seed phrase only |
| Risk | Exchange hack, fraud, bankruptcy | Losing your seed phrase, personal security |
| Example | Coinbase, Binance, Kraken | MetaMask, Ledger, Phantom |

Neither approach is inherently "wrong." Keeping some crypto on a reputable exchange is fine, especially if you're actively trading. But for long-term holdings — for anything you'd be devastated to lose — self-custody is the way.

## Security Best Practices

Alright, real talk. Here's how to not get rekt:

### The Non-Negotiables

1. **Never, ever share your seed phrase.** No legitimate service, wallet, or person will ever ask for it. If someone asks, it's a scam. 100% of the time.
2. **Write it on paper (or metal), store it offline.** Not on your computer. Not in the cloud. Not in a photo.
3. **Use a hardware wallet for significant amounts.** "Significant" is subjective — but if losing it would hurt, it's significant.
4. **Double-check addresses before sending.** Crypto transactions are irreversible. Send a small test amount first if you're nervous. There's no shame in that.
5. **Use a separate browser profile for crypto.** Keep your wallet extension isolated from your everyday browsing.
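Rule 4 can even be partly automated. Below is a minimal sketch in TypeScript; the helper name `looksLikeExpected` and the sample addresses are made up for illustration, and a check like this complements, rather than replaces, reading the full address off a trusted screen.

```typescript
// Sanity-check a pasted address against the prefix and suffix you
// verified by eye from the original source. This catches lookalike
// swaps — it is NOT a substitute for verifying the whole string.
function looksLikeExpected(pasted: string, firstChars: string, lastChars: string): boolean {
  return pasted.startsWith(firstChars) && pasted.endsWith(lastChars);
}

// Hypothetical example: you read "0x7a16...5428" off the recipient's profile.
const pasted = "0x7a16fF8270133F063aAb6C9977183D9e72835428";
console.log(looksLikeExpected(pasted, "0x7a16", "5428")); // true

// A swapped-in attacker address with a matching suffix but wrong prefix fails:
console.log(looksLikeExpected("0xDEADbeef00000000000000000000000000005428", "0x7a16", "5428")); // false
```

Note the second case: some malware generates addresses with *matching last characters*, which is exactly why you check the beginning and the end.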
### Common Scams to Watch For The crypto space is unfortunately rich with people trying to separate you from your money. Here are the big ones: - **Fake wallet apps** — Scammers create copycat wallet apps on app stores. Always download from the official website, never from a random link. Check the developer name, review count, and URL carefully. - **Phishing sites** — You get a DM or email: "Your wallet has been compromised! Click here to secure your funds." The link takes you to a site that looks exactly like MetaMask or your exchange. You enter your seed phrase, and it's over. **No legitimate service will ask you to "verify" or "validate" your wallet via a link.** - **Social engineering** — "Hey, I'm from MetaMask support" in your Discord DMs. No, they're not. Wallet companies don't DM you. Ever. - **Approval scams** — You connect your wallet to a malicious website and approve a transaction you don't fully understand. That approval lets the site drain your tokens. Always read what you're signing, and revoke old approvals regularly (tools like [revoke.cash](https://revoke.cash) help with this). - **Clipboard malware** — You copy a wallet address, malware swaps it with the attacker's address, and you send funds to the wrong place. Always verify the first AND last few characters of any address you paste. > **💡 The golden rule of crypto security:** Slow down. Scammers rely on urgency. "Act now or lose your funds!" is almost always a scam. Real security issues don't require you to enter your seed phrase into a website. ## Getting Started: Your First Wallet Ready to set one up? Here's the simplest path: 1. **Download a hot wallet** — I'd recommend [Rabby](https://rabby.io) or [MetaMask](https://metamask.io). Go directly to the official website. Don't Google it and click an ad — [fake wallet scams are rampant](https://support.metamask.io/more-web3/staying-safe/will-metamask-ever-ask-me-to-verify-my-account/). 2. 
**Create a new wallet** — The app will generate your seed phrase. Write it down immediately. On paper. 3. **Verify your seed phrase** — The app will quiz you on it. This isn't busy work — it's making sure you actually wrote it down. 4. **Store your seed phrase safely** — Somewhere secure, offline, where you won't lose it and nobody else can find it. 5. **Optional but recommended** — Once you have meaningful amounts, invest in a [Ledger](https://www.ledger.com) or [Trezor](https://trezor.io) and move your long-term holdings there. That's it. You now have a non-custodial wallet. You hold the keys. You own the coins. Nobody — no government, no company, no hacker on the other side of the world — can touch your crypto without your private key. That's the promise of self-custody. It's also the responsibility. And in crypto, those two things are inseparable. ## What's Next? You've got your wallet set up. You understand keys, seed phrases, and why self-custody matters. But so far, we've mostly talked about crypto as *money* — something you send and receive. In **[Part 4](/blog/crypto-unlocked-04-ethereum-smart-contracts)**, we're going to blow the doors open. We'll dive into **Ethereum and Smart Contracts** — where crypto stops being just digital cash and becomes a programmable platform that can run code, enforce agreements, and power entirely new kinds of applications. This is where things get really interesting. See you there. 🔑
← [Previous: Bitcoin — Digital Gold](/blog/crypto-unlocked-02-bitcoin-digital-gold) · [Series Index](/blog/series/crypto-unlocked) · [Next: Ethereum & Smart Contracts](/blog/crypto-unlocked-04-ethereum-smart-contracts) →
--- --- # Crypto Unlocked Part 4: Ethereum & Smart Contracts URL: /blog/crypto-unlocked-04-ethereum-smart-contracts Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Ethereum, Smart Contracts, EVM, Beginners Series: Crypto Unlocked (Part 4 of 21) --- Ethereum turned blockchain from a payment network into a programmable world computer. Here's how smart contracts work and why they changed everything. ![Ethereum — the world computer](/assets/blog/crypto-unlocked-04-ethereum-logo.png) If Bitcoin is a calculator, Ethereum is a full-blown computer. That's the single most important thing to understand about Ethereum. Bitcoin does one thing really well: it moves value from A to B without a middleman. That's incredible. But it's also... kind of it. Bitcoin's scripting language is intentionally limited. You can send money, you can lock money with conditions, and that's about where the party ends. Ethereum looked at that and said: "What if the blockchain could run *any* program?" And that question changed everything. ## The Kid Who Wanted Programmable Money In 2013, a 19-year-old Russian-Canadian programmer named [Vitalik Buterin](https://en.wikipedia.org/wiki/Vitalik_Buterin) published a [whitepaper](https://ethereum.org/en/whitepaper/). He'd been deep in the Bitcoin world — co-founding [Bitcoin Magazine](https://bitcoinmagazine.com/) as a teenager — but he kept running into the same wall. Every time someone wanted to build something new on a blockchain, they had to create an entirely new blockchain. Want decentralized betting? New chain. Want to issue tokens? New chain. Want domain names on-chain? You guessed it. Vitalik's insight was elegant: instead of building a new blockchain for every application, build *one* blockchain that can run any application. As he later wrote on [his blog](https://vitalik.eth.limo/), the goal was a platform with a built-in Turing-complete programming language. 
A general-purpose world computer, secured by the same cryptographic principles as Bitcoin, but infinitely more flexible. Ethereum [launched on July 30, 2015](https://ethereum.org/en/history/#frontier). The crypto world hasn't been the same since. ## Smart Contracts: The Vending Machine Analogy Here's where people's eyes start to glaze over, so let me keep this dead simple. A **smart contract** is just a program that lives on the blockchain. It has rules baked into its code, it executes automatically when conditions are met, and — here's the key part — no human needs to be involved. Think of a vending machine: 1. You put money in 2. You select what you want 3. The machine checks if you paid enough 4. If yes → it gives you the item 5. If no → it returns your money No cashier. No negotiation. No trust required. The rules are the rules, and the machine enforces them. [code block] Smart contracts work exactly the same way, except instead of candy bars, they handle money, tokens, property rights, votes, insurance payouts — really anything you can express as logic. > **The "smart" in smart contracts doesn't mean AI-smart.** It means self-executing. The contract does exactly what it's programmed to do, every single time, without anyone being able to tamper with it. That's the magic. Here's a simple example in plain English: - **IF** Alice sends 1 ETH to this contract - **AND** the date is after January 1st, 2026 - **THEN** send that 1 ETH to Bob - **ELSE** return it to Alice Once deployed, nobody can change those rules. Not Alice, not Bob, not even the person who wrote the contract. It just... runs. On thousands of computers simultaneously, all verifying each other. ## The EVM: Ethereum's Engine So where do these smart contracts actually *run*? On the **[Ethereum Virtual Machine](https://ethereum.org/en/developers/docs/evm/)** — the EVM. Think of the EVM as a giant, decentralized computer. Every node (computer) on the Ethereum network runs a copy of the EVM. 
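That "every node runs the same code" idea is easier to grasp in a language you may already know. Here's a toy model in TypeScript (not real EVM code): the Alice/Bob escrow from above written as a deterministic function that five simulated nodes each execute, reaching the same result.

```typescript
// A toy "smart contract": the Alice/Bob escrow as a pure, deterministic
// function. Every simulated node runs the same code on the same inputs
// and must reach the same result — that determinism is what lets a
// decentralized network agree without trusting anyone.
interface EscrowInput {
  amountEth: number;   // what Alice sent to the contract
  nowUtcMs: number;    // the agreed-upon "block time"
  unlockUtcMs: number; // release funds to Bob after this moment
}

function escrow(input: EscrowInput): { recipient: "bob" | "alice"; amountEth: number } {
  // IF Alice sent 1 ETH AND the unlock date has passed THEN pay Bob, ELSE refund Alice
  if (input.amountEth >= 1 && input.nowUtcMs > input.unlockUtcMs) {
    return { recipient: "bob", amountEth: input.amountEth };
  }
  return { recipient: "alice", amountEth: input.amountEth };
}

// Simulate five independent nodes executing the same transaction.
const tx: EscrowInput = {
  amountEth: 1,
  nowUtcMs: Date.UTC(2026, 1, 1),    // Feb 1, 2026
  unlockUtcMs: Date.UTC(2026, 0, 1), // Jan 1, 2026
};
const results = Array.from({ length: 5 }, () => JSON.stringify(escrow(tx)));
const consensus = results.every((r) => r === results[0]);

console.log(consensus, results[0]); // all five nodes agree: funds go to Bob
```

The real EVM adds gas metering, persistent storage, and thousands of nodes instead of five, but the core principle is exactly this: identical code, identical inputs, identical results, everywhere.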
When you deploy a smart contract, every node gets a copy. When someone interacts with that contract, every node executes the code and agrees on the result. This is wildly inefficient by traditional computing standards. Your laptop could run these programs a million times faster than Ethereum can. But that's not the point. The point is that **no single entity controls the computer**. No one can shut it down, censor it, or change the rules after the fact. You're trading speed for trust. And in many cases, that's a trade worth making. > **Developer note:** Smart contracts on Ethereum are typically written in a language called **[Solidity](https://soliditylang.org/)** (looks a bit like JavaScript). The code gets compiled into bytecode that the EVM can execute. If you've ever written code, you could learn Solidity in a weekend. If you haven't, don't worry — you don't need to write smart contracts to use them. ## Gas Fees: Why Using Ethereum Costs Money ![Ethereum gas fees fluctuate with network demand](/assets/blog/crypto-unlocked-04-gas-fees.png) Here's something that trips up newcomers: every action on Ethereum costs money. Sending ETH? Costs money. Interacting with a smart contract? Costs money. Deploying a new contract? Costs *a lot* of money. This cost is called **[gas](https://ethereum.org/en/developers/docs/gas/)**, and it exists for a very good reason. Remember, every node on the network has to execute your transaction. If using Ethereum were free, someone could write an infinite loop and grind the entire network to a halt. Gas is the defense mechanism — it puts a price on computation, so every operation costs something. 
Here's how it works: - Every operation (adding numbers, storing data, transferring tokens) has a **gas cost** measured in small units - You pay for gas in **ETH** (Ethereum's native currency) - The **gas price** fluctuates based on network demand — busy network = expensive, quiet network = cheap - You set a **gas limit** (the max you're willing to pay) and a **priority fee** (a tip to validators to process your transaction faster) When Ethereum gets busy — say during an NFT mint or a market crash — gas fees can spike to absurd levels. People have paid hundreds of dollars for a single transaction. This is Ethereum's biggest pain point, and a huge reason why scaling solutions exist (more on that later). > **Tip:** Never send a transaction during peak congestion unless it's urgent. Tools like [etherscan.io/gastracker](https://etherscan.io/gastracker) show current gas prices. Early mornings (UTC) and weekends tend to be cheapest. ## The Merge: Ethereum's Biggest Upgrade For its first seven years, Ethereum used **Proof of Work** — the same energy-hungry mining process as Bitcoin. Warehouses full of GPUs, burning electricity to solve puzzles. On September 15, 2022, that changed overnight. **[The Merge](https://ethereum.org/en/roadmap/merge/)** was Ethereum's transition from Proof of Work to **[Proof of Stake](https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/)**. Instead of miners competing with computing power, validators now stake (lock up) 32 ETH as collateral and take turns proposing and verifying blocks. 
![The Merge: Ethereum's transition from Proof of Work to Proof of Stake](/assets/blog/crypto-unlocked-04-the-merge.jpg) What changed: - **Energy usage dropped ~99.95%** — Ethereum went from consuming as much electricity as a small country to roughly the same as a few thousand homes - **No more mining** — you don't need expensive hardware, just ETH to stake - **New ETH issuance dropped ~90%** — far fewer new coins enter circulation - **Security model shifted** — instead of attacking with computing power, an attacker would need to buy and stake massive amounts of ETH (making attacks self-destructive, since their stake gets slashed) The Merge was one of the most impressive technical feats in crypto history. Imagine swapping out the engine of a plane mid-flight, while carrying billions of dollars in cargo. That's essentially what happened — and it worked flawlessly. ## ETH as Money: Staking and the Burn After The Merge, ETH became a fundamentally different asset. Two mechanics make it interesting: ### Staking Rewards If you stake 32 ETH (or use a liquid staking service with any amount), you earn roughly 3-4% annually for helping secure the network. This turned ETH into a yield-bearing asset — you get paid just for holding and staking it. ### EIP-1559: The Fee Burn Since the [London hard fork in August 2021](https://eips.ethereum.org/EIPS/eip-1559), a portion of every transaction fee (the base fee) gets **burned** — permanently destroyed. When network activity is high enough, more ETH gets burned than created, making ETH *deflationary*. The total supply actually shrinks. This is a big deal. Bitcoin has a fixed supply cap (21 million). ETH doesn't have a hard cap, but with the burn mechanism, it can actually *decrease* in supply over time. Some people call this "ultrasound money" — a riff on Bitcoin's "sound money" narrative, taken one step further. > **In simple terms:** Bitcoin is digital gold with a fixed supply. 
ETH is more like digital oil that powers a computer — but the oil occasionally gets burned faster than it's produced, making it scarcer over time. ## Real-World Smart Contracts in Action Smart contracts aren't just a cool idea. They've spawned entire industries: - **Tokens (ERC-20):** Anyone can create a new currency or asset on Ethereum with a simple smart contract. This is how thousands of tokens — from stablecoins like USDC to governance tokens like UNI — were born - **NFTs (ERC-721):** Non-fungible tokens are smart contracts that represent unique digital items. Art, music, game items, event tickets — all just smart contracts under the hood - **Decentralized Exchanges (DEXs):** Platforms like [Uniswap](https://uniswap.org/) let you swap tokens without a company in the middle. A smart contract holds the liquidity and executes trades automatically - **Lending & Borrowing:** Protocols like [Aave](https://aave.com/) let you lend your crypto and earn interest, or borrow against your holdings. No bank, no credit check, no paperwork — just code - **Stablecoins:** DAI, for example, is a dollar-pegged stablecoin maintained entirely by smart contracts. No bank account backing it — just overcollateralized crypto and automated liquidations - **DAOs:** Decentralized Autonomous Organizations are basically companies run by smart contracts. Token holders vote on proposals, and the code executes the decisions This is what makes Ethereum special. It's not just a cryptocurrency — it's a **platform** that other things get built on. Bitcoin is the asset. Ethereum is the ecosystem. ## Ethereum's Roadmap: The Scaling Problem Ethereum's biggest challenge has always been scale. The base layer processes about 15-30 transactions per second. Visa does 65,000. That gap is why gas fees spike and why the network gets congested. 
The roadmap to fix this is ambitious: - **Layer 2 rollups** (already live): Networks like Arbitrum, Optimism, and Base process transactions off the main chain but inherit Ethereum's security. They've already reduced fees from dollars to cents for most users - **[Proto-Danksharding (EIP-4844)](https://ethereum.org/en/roadmap/danksharding/):** Shipped in [March 2024 via the Dencun upgrade](https://ethereum.org/en/roadmap/dencun/), this introduced "blob" transactions — a new, cheaper way for Layer 2s to post data back to Ethereum. It slashed L2 fees by another 10-100x - **[Full Danksharding](https://ethereum.org/en/roadmap/danksharding/#what-is-danksharding):** The end goal. This will massively increase Ethereum's data capacity, enabling Layer 2s to scale to thousands of transactions per second at negligible cost The vision is clear: Ethereum itself becomes the **settlement layer** — the ultimate source of truth — while Layer 2s handle the day-to-day transactions. You'll use Ethereum without even knowing it, just like you use TCP/IP without thinking about it when you browse the web. > **The big picture:** Ethereum isn't trying to process every transaction itself. It's trying to be the most secure, decentralized foundation that everything else builds on top of. Layer 2s are the future of Ethereum scaling, and they're already here. ## The Bottom Line Ethereum took Bitcoin's breakthrough — trustless, decentralized value transfer — and generalized it into trustless, decentralized *anything*. Smart contracts are the building block, the EVM is the engine, and gas is the fuel. Is Ethereum perfect? Far from it. Gas fees can still be painful on the base layer. The learning curve is steep. And the roadmap is years from completion. But the ecosystem it's spawned — DeFi, NFTs, DAOs, Layer 2s, stablecoins — is unmatched in crypto. It's where the builders are, where the liquidity is, and where the innovation happens. 
Whether Ethereum stays the dominant smart contract platform forever is an open question. Competition is fierce, and some of those competitors have very different ideas about how a blockchain should work. ## What's Next **In Part 5**, we'll look at **Solana** — the blockchain that threw out Ethereum's playbook and bet everything on speed. Different architecture, different tradeoffs, different culture. If Ethereum is the reliable sedan, Solana is the sports car with the engine exposed. Let's pop the hood.
← [Previous: Wallets, Keys & Self-Custody](/blog/crypto-unlocked-03-wallets-keys-self-custody) · [Series Index](/blog/series/crypto-unlocked) · [Next: Solana — Speed at Scale](/blog/crypto-unlocked-05-solana-speed-at-scale) →
--- --- # Crypto Unlocked Part 5: Solana — Speed at Scale URL: /blog/crypto-unlocked-05-solana-speed-at-scale Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Solana, Blockchain, DeFi, Beginners Series: Crypto Unlocked (Part 5 of 21) --- Solana processes thousands of transactions per second at a fraction of a cent. Here's how it works and why it became Ethereum's biggest competitor. Imagine you're at a toll booth. One lane. Thousands of cars. Every driver has to stop, pay, get a receipt, and wait for the barrier to lift before the next car can go. That's Ethereum — secure, reliable, but slow. Now imagine a highway with thousands of lanes, cars flying through at 200 mph, paying tolls wirelessly without ever slowing down. That's the promise of [Solana](https://solana.com). In [Part 4](/blog/crypto-unlocked-04-ethereum-smart-contracts), we explored Ethereum and why it's the backbone of decentralized applications. But we also touched on its Achilles' heel: speed and cost. Ethereum handles about 15 transactions per second (TPS). Visa handles around 65,000. For crypto to go mainstream, *something* had to bridge that gap. Enter Solana — the blockchain that said "what if we just made it fast?" ## The Speed Problem (And Why It's Hard to Fix) Here's the fundamental tension in blockchain design: **security, decentralization, and speed — pick two.** This is called the "blockchain trilemma," and every chain makes different tradeoffs. Ethereum chose security and decentralization. The result? Rock-solid, trustworthy, but expensive and slow. When demand spikes, gas fees can hit $50+ for a simple swap. Solana's bet was different. What if you could get all three — but lean *hard* into speed while still keeping things secure enough? The founder, Anatoly Yakovenko, was a former Qualcomm engineer. He didn't come from crypto culture. He came from building systems that process data at telecom scale. And he brought that mindset to blockchain. 
The result: Solana can handle **thousands of transactions per second** with finality in about 400 milliseconds. Fees? Usually a fraction of a cent. We're talking $0.00025 per transaction. ![TPS comparison across major blockchains](/assets/blog/crypto-unlocked-05/tps-comparison.jpg) Let that sink in. On Ethereum, a token swap might cost you $5-50. On Solana, it costs a few hundredths of a cent. ## Proof of History: Solana's Secret Sauce Every blockchain needs a way to agree on the order of transactions. Bitcoin and Ethereum do this through consensus — validators communicate back and forth to agree on what happened and when. It works, but all that communication takes time. Solana's innovation is called **[Proof of History (PoH)](https://solana.com/solana-whitepaper.pdf)**. Think of it like this: > Imagine a factory assembly line. Instead of workers stopping to check with each other about what order to do things, there's a giant clock on the wall that everyone can see. Each worker timestamps their task and moves on. No meetings. No debates. Just look at the clock and keep working. That's PoH in a nutshell. It's a cryptographic clock — a verifiable sequence of hashes that proves that time has passed between events. Validators don't need to talk to each other to agree on *when* things happened. The timestamps are baked right into the data. This alone doesn't make Solana fast. It's one of **eight innovations** working together (including [Tower BFT](https://solana.com/developers) for consensus, Gulf Stream for transaction forwarding, and Turbine for block propagation). But PoH is the foundation that makes everything else possible. > **Beginner tip:** You don't need to understand the technical details of PoH to use Solana. Just know that it's the reason transactions feel instant and cost almost nothing. The engineering is doing the heavy lifting so you don't have to think about it. ## What Sub-Second Finality Actually Means For You Let's get practical.
Here's what Solana's speed means in real life: - **Swapping tokens** feels like using a normal app. Click, confirm, done. No waiting 15 seconds for a block, no praying your transaction doesn't get stuck. - **NFT minting** can handle thousands of people minting simultaneously without the network grinding to a halt (most of the time — more on that later). - **DeFi trading** on Solana feels closer to a centralized exchange. Limit orders, instant fills, real-time price updates. - **Micropayments** become viable. When a transaction costs $0.00025, you can send someone a penny without the fee being 100x the amount. - **Gaming and social apps** can put actions on-chain that would be absurdly expensive on Ethereum. This isn't just a nice-to-have. It unlocks entirely new categories of applications that simply can't exist on slower chains. ## The Solana Ecosystem: Who's Building Here? Solana has attracted a massive ecosystem. Here are the heavy hitters: - **[Jupiter](https://jup.ag)** — The go-to swap aggregator, self-described as "The DeFi Superapp." Think of it as Solana's Google for finding the best token prices. It routes your trade across multiple exchanges to get you the best deal. Jupiter has become *the* DeFi hub on Solana. - **[Raydium](https://raydium.io)** — One of the first major decentralized exchanges (DEXs) on Solana. It pioneered the AMM (automated market maker) model on the chain and remains a cornerstone of Solana DeFi. - **[Marinade Finance](https://marinade.finance)** — Liquid staking for SOL. Stake your SOL, get mSOL in return, and keep using that mSOL in DeFi while earning staking rewards. Best of both worlds. - **[Jito](https://www.jito.network)** — MEV (maximal extractable value) infrastructure and liquid staking. More advanced, but important for how the network's economics work under the hood. - **[Tensor](https://tensor.trade)** — The leading NFT marketplace on Solana. Fast, trader-friendly, with real-time floor price tracking and instant listings. 
- **[Magic Eden](https://magiceden.io)** — Started on Solana, expanded to other chains. One of the biggest NFT marketplaces in crypto. ![The Solana ecosystem](/assets/blog/crypto-unlocked-05/solana-ecosystem.jpg) And hundreds more — from gaming studios to payment processors to social platforms like the decentralized Twitter alternatives being built on Solana's infrastructure. ## The Saga Phone and Going Mobile-First In April 2023, [Solana Mobile](https://solanamobile.com) — a subsidiary of Solana Labs — did something no other blockchain had seriously attempted: they shipped a phone. The **Saga** was an Android device with a built-in crypto wallet, a seed vault for secure key storage, and a dApp store. The first Saga was... a tough sell at $1,000. Sales were slow. Critics called it a gimmick. Then a massive BONK airdrop to Saga holders made the phone worth more than its retail price overnight, and suddenly they were selling on eBay for $2,000+. The **Seeker** (Saga's successor) learned from this. More affordable, better specs, and the promise of exclusive token drops and app experiences. The thesis is bold: **crypto needs to be mobile-native**, not just mobile-compatible. Most of the world accesses the internet through phones, not laptops. If crypto wants mass adoption, it needs to meet people where they are. Whether or not Solana's phone strategy succeeds, the thinking behind it is sound. And the pre-orders for Seeker suggest plenty of people are betting it will. Beyond phones, Solana is making inroads with traditional finance. In September 2023, [Visa announced](https://usa.visa.com/about-visa/newsroom.html) it had added Solana blockchain support for sending USDC stablecoin payments to merchants — a significant vote of confidence from one of the world's largest payment networks. ## The Rocky Road: Outages, FTX, and the Comeback Let's address the elephant in the room. Solana's history isn't all speed records and smooth sailing. 
**The outages.** Between September 2021 and late 2022, Solana experienced multiple network outages — full stops where the chain just... didn't work. The first major one in September 2021 lasted 17 hours after a transaction surge caused the network to fork. In 2022, there were at least three more: a seven-hour bot-induced shutdown in May, a four-and-a-half-hour bug-related outage the following month, and a six-hour consensus bug in October. For a chain marketing itself as the future of high-speed finance, going offline is about the worst look possible. **The FTX connection.** Sam Bankman-Fried's [FTX](https://en.wikipedia.org/wiki/FTX) and Alameda Research were massive Solana backers. FTX alone held $982 million in SOL tokens, and it was Alameda's second-largest holding. When FTX collapsed in November 2022, SOL's price dropped 40% in a single day, eventually sliding from ~$35 to under $10. The entire ecosystem was painted with the FTX brush. Many wrote Solana's obituary. **The comeback.** And yet, here we are. SOL recovered and then some. The network stabilized — outages became rare, then almost nonexistent. The developer community didn't leave. New projects kept launching. The FTX estate's SOL holdings were gradually sold without crashing the market. And Solana emerged from the bear market arguably *stronger* than it went in. > **The lesson here?** In crypto, narratives shift fast. The "dead chain" of 2022 became the hottest ecosystem of 2024-2025. Don't write off a project based on a single chapter of its story. ## SOL Tokenomics and Staking SOL is Solana's native token. Here's what you need to know: - **Initial supply:** ~500 million SOL at launch - **Inflation:** SOL has an inflationary model, starting at ~8% annually and decreasing by 15% each year until it reaches a long-term rate of ~1.5% - **Staking yield:** Stakers earn rewards from this inflation.
Current yields hover around 6-7% APY - **Burn mechanism:** 50% of all transaction fees are burned (destroyed), creating some deflationary pressure - **Use cases:** Pay transaction fees, stake for network security, governance participation Staking SOL is straightforward. You can delegate to a validator directly through wallets like [Phantom](https://phantom.app) or [Solflare](https://solflare.com). Or use liquid staking ([Marinade](https://marinade.finance), [Jito](https://www.jito.network)) to keep your SOL productive while it's staked. > **Beginner tip:** If you're holding SOL and not staking it, you're leaving money on the table. Liquid staking through [Marinade](https://marinade.finance) or [Jito](https://www.jito.network) lets you earn staking rewards *and* use your staked SOL in DeFi simultaneously. ## Solana vs Ethereum: Different Beasts, Different Tradeoffs This isn't a war. It's a design spectrum. Here's how to think about it: - **Speed** — Ethereum: ~15 TPS (base layer) · Solana: ~4,000+ TPS - **Fees** — Ethereum: $0.50 - $50+ · Solana: $0.00025 - **Finality** — Ethereum: ~12 minutes · Solana: ~400ms - **Decentralization** — Ethereum: ~900,000+ validators · Solana: ~1,800 validators - **Hardware requirements** — Ethereum: Consumer laptop · Solana: High-end server - **Philosophy** — Ethereum: Decentralization first · Solana: Performance first Ethereum's validator set is massive and can run on modest hardware. That's deeply decentralized. Solana validators need beefy machines (high RAM, fast CPUs, enterprise-grade internet). That means fewer validators and a more centralized network. Is that a problem? Depends who you ask. Ethereum maxis say yes — decentralization is the whole point. Solana believers argue that 1,800 validators is *plenty* decentralized, and that nobody cares about decentralization if the network is too slow and expensive to use. The truth? **Both are right.** Different applications need different tradeoffs. 
Settling a $10 million institutional trade? You probably want Ethereum's battle-tested security. Buying a coffee with crypto? You want Solana's speed and near-zero fees. The future isn't one chain to rule them all. It's horses for courses. ## The Memecoin Explosion: Pump.fun and the Solana Casino No honest Solana article can skip this. In 2024, Solana became ground zero for the memecoin explosion, largely thanks to **[pump.fun](https://pump.fun)** — a platform that let anyone launch a token in seconds with minimal cost. The trend reached a fever pitch in January 2025 when US President Donald Trump launched his own [$TRUMP memecoin](https://en.wikipedia.org/wiki/$TRUMP) on Solana, briefly pushing SOL to a new all-time high of $294. The result was absolute chaos. Thousands of tokens launched daily. Some made people rich overnight. Most went to zero in hours. Dog coins, cat coins, political coins, coins based on typos — if you could think of it, someone already launched it. Was this good for Solana? It's complicated: - **The bull case:** It proved Solana's tech works at scale. Millions of transactions, millions of users, the network handled it. It onboarded a massive wave of new users who'd never touched DeFi before. - **The bear case:** It attracted scammers, rug pulls, and speculation that made crypto look like a casino. The "Solana is for gambling" narrative wasn't exactly the brand the foundation wanted. Love it or hate it, the memecoin era proved something important: **when you make transactions fast and cheap, people will actually use blockchain.** Whether they use it for Nobel Prize-worthy innovations or dog-themed gambling tokens... well, that's humanity for you. ## The Big Picture Solana represents a different philosophy in crypto. Where Ethereum says "we'll scale carefully, layer by layer, never compromising on decentralization," Solana says "let's make this thing fast enough that regular people actually want to use it, and figure out the rest as we go." 
Both approaches have merit. Both have risks. And both are pushing the entire industry forward. If you're new to crypto, Solana is worth exploring. Set up a [Phantom wallet](https://phantom.app), grab some SOL, try a swap on [Jupiter](https://jup.ag), mint an NFT on [Tensor](https://tensor.trade). The experience will feel remarkably different from Ethereum — and that contrast will teach you more about blockchain tradeoffs than any article can. ## What's Next? We've now covered Bitcoin, Ethereum, and Solana — three very different blockchains with three very different philosophies. But here's the thing: none of them exist in isolation. In **[Part 6: The Multi-Chain World](/blog/crypto-unlocked-06-multi-chain-world)**, we'll explore how these chains connect, what bridges and interoperability actually mean, and why the future of crypto isn't about picking a winner — it's about all of them working together. See you there. 🔗
← [Previous: Ethereum & Smart Contracts](/blog/crypto-unlocked-04-ethereum-smart-contracts) · [Series Index](/blog/series/crypto-unlocked) · [Next: The Multi-Chain World](/blog/crypto-unlocked-06-multi-chain-world) →
--- --- # Crypto Unlocked Part 6: The Multi-Chain World URL: /blog/crypto-unlocked-06-multi-chain-world Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Layer 2, Blockchain, Arbitrum, Beginners Series: Crypto Unlocked (Part 6 of 21) --- There isn't going to be one blockchain to rule them all. Here's why we have hundreds of chains and how Layer 2s, sidechains, and app-chains all fit together. One of the first questions people ask when they start exploring crypto is: "Which blockchain is *the* blockchain?" It's a reasonable question. We have one internet, one TCP/IP protocol, one HTTP standard. Surely there'll be one blockchain that wins and the rest will fade away, right? Nope. Not even close. There are hundreds of active blockchains today, and that number is growing. This isn't a bug — it's a feature. Different chains make different tradeoffs, and those tradeoffs matter depending on what you're trying to do. A chain optimized for high-frequency trading looks nothing like one optimized for storing land records. In this part, we're going to untangle the multi-chain world. Layer 1s, Layer 2s, rollups, app-chains, bridges — by the end, you'll understand why this zoo of chains exists and how they all fit together. ## The Blockchain Trilemma: Pick Two Before we look at specific chains, you need to understand *why* there are so many. It comes down to a concept called the **[blockchain trilemma](https://vitalik.eth.limo/general/2021/04/07/sharding.html)**, popularized by Ethereum co-founder Vitalik Buterin. 
![The Blockchain Trilemma — Decentralization, Security, Scalability: pick two](/assets/blog/crypto-unlocked/blockchain-trilemma.jpg) Every blockchain tries to achieve three things: - **Decentralization** — lots of independent nodes running the network, so no single entity controls it - **Security** — extremely hard to attack, manipulate, or shut down - **Scalability** — can process lots of transactions quickly and cheaply Here's the catch: **you can only optimize for two out of three.** It's like the old project management joke — fast, cheap, or good: pick two. - **Ethereum** prioritizes decentralization and security. Result? It's slow and expensive during high demand. - **[Solana](https://solana.com/)** prioritizes security and scalability. Result? Fewer validators, meaning it's more centralized (and has had several outages). - **BNB Chain** prioritizes scalability and security. Result? Only 21 validators — basically a corporate blockchain wearing a decentralization costume. None of these are "wrong." They're just different tradeoffs for different use cases. And that's exactly why we have a multi-chain world. ## Layer 1s: The Foundation Chains A **Layer 1** (L1) is a standalone blockchain with its own consensus mechanism, its own validators, and its own security. Think of L1s as independent countries — each with its own laws, currency, and infrastructure. We covered Bitcoin and Ethereum in earlier parts. Here are some other major L1s worth knowing: ### Avalanche (Subnets) Avalanche takes an interesting approach: instead of forcing everyone onto one chain, it lets developers create **subnets** — essentially custom blockchains that plug into the Avalanche ecosystem. A gaming company can spin up a subnet optimized for gaming. A bank can create a private subnet with compliance rules baked in. Think of it like franchise restaurants. They all share the Avalanche brand and infrastructure, but each location can customize its menu. 
### [Cosmos](https://cosmos.network/) (The Internet of Blockchains) Cosmos doesn't even try to be one chain. Its whole philosophy is: "Every application should have its own blockchain." These are called **app-chains**, and they communicate with each other through a protocol called **[IBC](https://ibcprotocol.dev/)** (Inter-Blockchain Communication). If Avalanche is a franchise, Cosmos is more like the European Union — sovereign nations that agreed on shared trade protocols so goods (tokens) can flow freely between them. ### [Polkadot](https://polkadot.com/) (Parachains) Polkadot uses a central **Relay Chain** that provides shared security to connected chains called **parachains**. Each parachain can be customized for a specific purpose, but they all benefit from the Relay Chain's security. Think of it as an airport hub. Each parachain is like a terminal with its own airlines and destinations, but they all share the same air traffic control and runway system. ### BNB Chain Built by Binance, the world's largest crypto exchange. BNB Chain is fast and cheap, but achieves this by being quite centralized — only 21 validators, all essentially approved by Binance. It's popular for DeFi and gaming because transactions cost fractions of a cent. > **Real talk:** BNB Chain is perfectly fine for experimenting and small transactions. Just understand that "decentralized" is doing some heavy lifting when 21 Binance-approved nodes run the whole show. ## Layer 2s: Building on Top of Ethereum Here's where things get really interesting. Instead of building a whole new blockchain from scratch, what if you could build *on top of* an existing secure chain? That's exactly what **Layer 2s** (L2s) do. They process transactions off Ethereum's main chain (which is slow and expensive), but periodically post proof of those transactions back to Ethereum. This means they **inherit Ethereum's security** while being much faster and cheaper. Imagine Ethereum is a busy courthouse. 
Every transaction is a legal case that needs to go through the full court process — expensive and slow. Layer 2s are like arbitration services. They handle the disputes quickly and cheaply, but the final ruling is still backed by the authority of the courthouse. [code block] You can track L2 ecosystem health, TVL, and risk assessments on [L2Beat](https://l2beat.com/scaling/summary) — the go-to dashboard for Layer 2 data. There are two main flavors of L2 rollups: [code block] ### Optimistic Rollups: Trust, But Verify Optimistic rollups **assume transactions are valid** (hence "optimistic") and only check them if someone raises a challenge. There's a **challenge period** — usually about 7 days — during which anyone can say "Hey, that transaction was fraudulent!" and prove it. The major optimistic rollups: - **[Arbitrum](https://arbitrum.io/)** — The biggest L2 by total value locked. Huge DeFi ecosystem. If you're using DeFi on a budget, you're probably on Arbitrum. - **[Optimism](https://www.optimism.io/)** — Pioneer of the optimistic rollup design. Runs the "[Superchain](https://www.optimism.io/superchain)" vision where multiple chains share its technology. - **[Base](https://base.org/)** — Built by Coinbase using Optimism's technology. It's become the go-to chain for consumer apps, onboarding millions of users who may not even realize they're using crypto infrastructure. > **Tip:** When you withdraw from an optimistic rollup back to Ethereum, that 7-day challenge period applies to you. Your funds will be locked for about a week. Plan accordingly, or use a third-party bridge for faster (but slightly more expensive) exits. ### ZK Rollups: Prove It Mathematically ZK (Zero-Knowledge) rollups take a different approach. Instead of assuming everything is fine and waiting for challenges, they generate a **mathematical proof** that every transaction in a batch is valid. This proof is then posted to Ethereum. 
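To make the contrast concrete, here's a toy TypeScript sketch of the two settlement styles. Everything in it (`Batch`, `settleOptimistic`, `settleZk`) is invented for illustration and doesn't correspond to any real rollup's API:

```typescript
// Illustrative only: invented types and names, not a real rollup API.
type Batch = { txs: string[]; stateRoot: string };

const CHALLENGE_PERIOD_MS = 7 * 24 * 60 * 60 * 1000; // ~7 days

// Optimistic rollup: the batch is assumed valid. It stays "pending"
// until the challenge window passes, and reverts if fraud is proven.
function settleOptimistic(
  _batch: Batch,
  postedAt: number,
  now: number,
  fraudProven: boolean
): "pending" | "final" | "reverted" {
  if (fraudProven) return "reverted";
  return now - postedAt >= CHALLENGE_PERIOD_MS ? "final" : "pending";
}

// ZK rollup: a validity proof is checked up front, so acceptance
// and finality happen in the same step.
function settleZk(_batch: Batch, proofValid: boolean): "final" | "rejected" {
  return proofValid ? "final" : "rejected";
}

const batch: Batch = { txs: ["alice->bob:1 ETH"], stateRoot: "0xabc" };
settleOptimistic(batch, 0, 1000, false); // "pending" until the window closes
settleZk(batch, true);                   // "final" immediately
```

The point of the sketch: an optimistic batch is innocent until proven guilty and only hardens into finality once the window closes, while a ZK batch is checked before it's ever accepted.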
It's like the difference between an exam where the teacher spot-checks random answers (optimistic) versus one where you show all your work and the teacher can verify it instantly (ZK). The major ZK rollups: - **[zkSync](https://zksync.io/)** — One of the earliest ZK rollups to launch. Focuses on low fees and developer tooling. - **[StarkNet](https://www.starknet.io/)** — Uses a different type of proof called STARKs (vs. SNARKs used by others). More complex but potentially more future-proof and doesn't require a trusted setup ceremony. - **[Scroll](https://scroll.io/)** — Aims to be the most Ethereum-compatible ZK rollup, making it easy for developers to port existing Ethereum apps. **So which is better — Optimistic or ZK?** Right now, optimistic rollups are more mature and have bigger ecosystems. But most people in the industry believe ZK rollups are the long-term winner because the math is more elegant — no challenge periods, instant finality, and potentially better privacy features. The tech is just harder to build, so it's taking longer to mature. ## App-Chains: Your Own Personal Blockchain Sometimes a project has such specific needs that even a general-purpose L2 isn't good enough. Enter **app-chains**: blockchains built for a single application. The best example is **Hyperliquid**, a perpetual futures exchange that built its own Layer 1 blockchain from scratch. Why? Because a trading platform needs sub-second latency and custom order matching logic that general-purpose chains can't provide. By controlling the entire chain, Hyperliquid can optimize every millisecond. Other examples include **dYdX** (which moved from Ethereum to its own Cosmos app-chain) and various gaming projects that need cheap, fast transactions without competing with DeFi traders for block space. Think of app-chains like private roads. 
Public highways (general-purpose chains) work great for most people, but if you're Amazon and you're running thousands of delivery trucks, it might make sense to build your own logistics network. ## Bridges: Moving Between Worlds With all these different chains, you need a way to move assets between them. That's what **bridges** do. Want to move your ETH from Ethereum to Arbitrum? You use a bridge. Want to move USDC from Ethereum to Avalanche? Bridge. [code block] Here's how a basic bridge works: 1. You deposit tokens into a smart contract on Chain A 2. The bridge protocol verifies your deposit 3. Equivalent tokens are minted or released on Chain B 4. You now have your assets on the new chain Simple in theory. Terrifying in practice. > ⚠️ **Warning:** Bridges are the single biggest point of failure in the multi-chain world. Over **$2.5 billion** has been stolen from bridge hacks. The [Ronin bridge hack](https://rekt.news/ronin-rekt/) alone lost $624 million. The [Wormhole hack](https://rekt.news/wormhole-rekt/): $326 million. The [Nomad hack](https://rekt.news/nomad-rekt/): $190 million. Why are bridges so risky? Because they're essentially giant honeypots. A bridge holds millions (sometimes billions) of dollars in locked assets, and if a hacker can trick the bridge into releasing those funds, it's game over. You're also trusting the bridge's validators or smart contracts to be bulletproof — and history says they often aren't. **Practical advice:** - Use official bridges when possible (Arbitrum Bridge, Optimism Bridge, etc.) 
- For cross-chain moves, consider going through a major exchange instead — deposit on Chain A, withdraw on Chain B - Don't leave large amounts sitting in bridge contracts - Stick to well-audited bridges with long track records ## Chain Abstraction: The Future Here's the honest truth: normal people should *never have to think about which chain they're on.* The fact that you need to manually switch networks, bridge assets, and pay different gas tokens on different chains is a terrible user experience. **Chain abstraction** is the industry's answer. The idea is to build a layer of smart infrastructure that handles all the multi-chain complexity behind the scenes. You just say "I want to swap this token" or "I want to buy this NFT," and the system figures out the optimal chain, bridges your assets, and executes the transaction — all in one click. Projects working on this include: - **Particle Network** — universal accounts that work across chains - **Socket / Bungee** — aggregates bridges and DEXs across chains - **Near's chain signatures** — control accounts on any chain from one Near account - **[ERC-4337](https://eips.ethereum.org/EIPS/eip-4337) (Account Abstraction)** — not chain abstraction exactly, but a building block that makes smart wallets possible We're not fully there yet, but it's getting closer. Base is a great example of partial success — millions of people use apps built on Base without ever knowing (or caring) that they're on an Ethereum L2. The endgame looks a lot like the internet today. You don't think about which server hosts a website, which CDN delivers the images, or which DNS provider resolves the domain. You just type a URL and it works. Blockchains will get there too. ## Pulling It All Together Let's zoom out. 
Here's the multi-chain landscape in one mental model: - **Layer 1s** (Ethereum, Solana, Avalanche, Cosmos chains) = Independent countries with their own rules and security - **Layer 2s** (Arbitrum, Optimism, Base, zkSync) = States/provinces within a country (Ethereum), sharing its security but running their own operations - **App-chains** (Hyperliquid, dYdX) = Private corporate campuses with their own infrastructure, connected to the broader world - **Bridges** = International airports and border crossings — necessary but occasionally dangerous - **Chain abstraction** = The future passport-free travel zone where borders become invisible There will never be "one chain to rule them all," and that's okay. The internet isn't one server. The financial system isn't one bank. A multi-chain world is the natural outcome of different needs requiring different solutions. The key is making all these chains work together so seamlessly that users never have to think about it. We're heading there, one bridge and one rollup at a time. ## What's Next? We've talked about the infrastructure — the chains, the layers, the connections. But what actually *lives* on these chains? In **[Part 7](/blog/crypto-unlocked-07-tokens-and-standards)**, we'll dive into **tokens and token standards** — what ERC-20, ERC-721, and ERC-1155 actually mean, how tokens are created, and why not all tokens are created equal. If you've ever wondered what makes a "shitcoin" different from a "legit" token (technically speaking), that one's for you.
← [Previous: Solana — Speed at Scale](/blog/crypto-unlocked-05-solana-speed-at-scale) · [Series Index](/blog/series/crypto-unlocked) · [Next: Tokens & Standards](/blog/crypto-unlocked-07-tokens-and-standards) →
--- --- # Crypto Unlocked Part 7: Tokens & Standards URL: /blog/crypto-unlocked-07-tokens-and-standards Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Tokens, NFTs, Stablecoins, Beginners Series: Crypto Unlocked (Part 7 of 21) --- ERC-20, NFTs, stablecoins, memecoins — the different types of crypto tokens, how they're created, and what gives them value. Right now, as you read this sentence, someone somewhere is launching a new crypto token. It might be the next billion-dollar protocol. It might be a dog wearing a hat. It might be an outright scam. Welcome to the wild, wonderful, occasionally terrifying world of tokens. If you've been following this series, you know about blockchains, wallets, and smart contracts. Now it's time to understand the things that actually *live* on those blockchains — the tokens. Because "crypto" isn't just Bitcoin and Ethereum. There are hundreds of thousands of tokens out there, and understanding what they are (and aren't) is the difference between navigating this space and getting wrecked by it. ## Coins vs. Tokens: The Difference Actually Matters First, let's clear up something that confuses almost everyone. A **coin** is the native currency of its own blockchain. Bitcoin (BTC) runs on the Bitcoin network. Ether (ETH) runs on Ethereum. Solana (SOL) runs on Solana. These are coins — they're baked into the DNA of their chain. You need them to pay transaction fees, and they're what miners or validators earn as rewards. A **token**, on the other hand, is built *on top of* an existing blockchain using smart contracts. Think of it this way: if Ethereum is a shopping mall, then ETH is the currency the mall itself operates in, but tokens are all the individual gift cards, loyalty points, and arcade coins that the shops inside create. USDC? That's a token on Ethereum (and other chains). Shiba Inu? Token. Uniswap's UNI? Token. They all rely on Ethereum's infrastructure to exist, but they each have their own purpose and value. 
> **Key insight:** Every coin has its own blockchain. Every token borrows someone else's. ## ERC-20: The Standard That Launched a Thousand Tokens Back in November 2015, a developer named [Fabian Vogelsteller](https://github.com/frozeman) (with input from Vitalik Buterin) proposed a simple idea: what if all tokens on Ethereum followed the same basic rules? What if every token could be sent, received, and checked the same way? That proposal became **[ERC-20](https://eips.ethereum.org/EIPS/eip-20)** (Ethereum Request for Comments #20), and it changed everything. Before ERC-20, every token was a snowflake — unique code, unique behavior, a nightmare for wallets and exchanges to support. ERC-20 said: "Here's a template. Your token must be able to do these things": - **Transfer** tokens from one address to another - **Check the balance** of any address - **Approve** another address to spend tokens on your behalf - Report the **total supply** That's basically it. A shared interface. And because every ERC-20 token speaks the same language, any wallet that supports one supports them all. Any exchange that lists one can list another. Any DeFi protocol can plug into any token. This standard is what powered the ICO (Initial Coin Offering) boom of 2017. Suddenly, anyone could create a token in an afternoon and raise millions. Some projects were legitimate. Many were not. But the standard itself was — and remains — brilliant engineering. ![Ethereum Token Standards — ERC-20, ERC-721, and ERC-1155 compared](/assets/blog/crypto-unlocked/token-standards-overview.svg) ## Stablecoins: The Dollar, But On-Chain If you've ever looked at crypto prices and thought "I want off this roller coaster for a bit," stablecoins are your exit ramp. They're tokens designed to maintain a stable value, usually pegged to $1 USD. But not all stablecoins are created equal. How they *maintain* that peg is where things get interesting — and sometimes dangerous. 
### Fiat-Backed (Centralized) **[USDC](https://www.circle.com/usdc)** (Circle) and **[USDT](https://tether.to/)** (Tether) are the big two. The idea is simple: for every token in circulation, there's a real dollar (or equivalent) sitting in a bank account somewhere. You want to redeem your USDC for actual dollars? Circle says "sure, here you go." - **USDC:** Transparent, audited regularly, the "good boy" of stablecoins - **USDT:** Larger market cap (~$140B+), murkier reserves, perpetually controversial — but stubbornly dominant. You can track stablecoin market caps in real time on [DefiLlama](https://defillama.com/stablecoins). ### Crypto-Collateralized (Decentralized) **[DAI](https://makerdao.com/)** (MakerDAO, now rebranded as Sky) takes a different approach. There's no bank account. Instead, users lock up *more* crypto than the DAI they mint. Want to create 100 DAI? You might need to lock up $150 worth of ETH as collateral. If your collateral drops in value, the system liquidates it automatically. It's more complex, but it's truly decentralized. No company can freeze your DAI. ### Algorithmic (Here Be Dragons) And then there's the algorithmic approach, where code alone tries to maintain the peg using supply-and-demand mechanics. No reserves. No collateral. Just math and incentives. This is what **[Terra/UST](https://en.wikipedia.org/wiki/Terra_(blockchain))** tried. For a while, it worked. UST held its peg — helped by the [Anchor Protocol](https://en.wikipedia.org/wiki/Terra_(blockchain)#Anchor_Protocol) offering ~19.5% yields — and its companion token LUNA soared to an all-time high of $119.51. Then on May 9, 2022, it didn't work. UST lost its peg, LUNA entered a death spiral, and **~$45 billion in market cap evaporated in a week**. People lost their life savings. Some lost more than that. Terraform Labs [filed for bankruptcy](https://en.wikipedia.org/wiki/Terra_(blockchain)) in January 2024, and founder Do Kwon was later extradited to face fraud charges. 
> **The lesson:** When someone promises you a "stable" asset backed by nothing but an algorithm and vibes, be extremely skeptical. If the peg mechanism relies on confidence alone, it only works until it doesn't. ![Stablecoin approaches compared — fiat-backed, crypto-collateralized, and algorithmic](/assets/blog/crypto-unlocked/stablecoin-types.svg) ## NFTs: Non-Fungible Tokens Everything we've covered so far — ERC-20s, stablecoins, memecoins — are **fungible**. One USDC equals any other USDC. But what about unique digital items? A piece of art, a concert ticket, a domain name? That's where **NFTs (Non-Fungible Tokens)** come in. They use different standards — [ERC-721](https://eips.ethereum.org/EIPS/eip-721) for unique items and [ERC-1155](https://eips.ethereum.org/EIPS/eip-1155) for mixed collections — to represent one-of-a-kind digital ownership on-chain. NFTs got a wild reputation during the 2021 boom (and the crash that followed), but the underlying technology of provable digital ownership is genuinely powerful — far beyond profile pictures. > We dedicate the entire [next chapter](/blog/crypto-unlocked-08-nfts-beyond-jpegs) to NFTs: the boom, the bust, the real use cases, and why the technology still matters. Don't skip it. ## Memecoins: The Casino Is Open Let's talk about the elephant — or rather, the dog — in the room. **Dogecoin** started as a literal joke in 2013. It now has a market cap of billions. **Shiba Inu** was created as a "Dogecoin killer" and also became a multi-billion dollar asset. **PEPE** rode the frog meme to a peak market cap over $1 billion. **WIF** (dogwifhat) is a dog wearing a hat. That's it. That's the thesis. Memecoins have no utility, no technology, no roadmap. They have *culture*. They have *community*. And they have the raw, unfiltered energy of people who'd rather gamble on a cartoon frog than buy index funds. 
Platforms like **[pump.fun](https://pump.fun/)** on Solana made it possible to launch a memecoin in under a minute for a few dollars. This spawned an explosion of token launches — thousands per day — most going to zero within hours, a few making early buyers absurdly rich. > **Real talk:** Memecoins are gambling. Some people win big. Most people lose. If you play this game, only use money you genuinely don't care about losing. The house — meaning the early insiders — almost always wins. ## SPL Tokens: Solana's Take Everything I've described so far has been Ethereum-centric, but Ethereum isn't the only game in town. **Solana** has its own token standard called **SPL** (Solana Program Library). SPL tokens work conceptually the same as ERC-20 tokens — fungible, transferable, standardized — but they benefit from Solana's speed and low fees. Creating and transferring SPL tokens costs fractions of a cent, which is exactly why Solana became the home of the memecoin explosion. When launching a token costs almost nothing, people launch a *lot* of tokens. Other chains have their own standards too: BEP-20 on BNB Chain, TRC-20 on Tron, and so on. The concept is always the same. The implementation details differ. ## Anyone Can Create a Token (And That's Terrifying) Here's something that blows people's minds: **you could create your own token right now**. With some basic tools and a few dollars in gas fees on Ethereum (or pennies on Solana), you could deploy a token called JoeCoin and have a million units in your wallet by lunchtime. This is genuinely powerful. It means any project, community, or creator can launch a token without asking permission from a bank, a government, or a tech company. It's financial Lego. It's also genuinely dangerous. Because if *anyone* can create a token, then scammers can too. And they do. Constantly. 
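To make the danger concrete, here's a hypothetical toy ledger for that JoeCoin (pure illustration, not any real token's code). Everything looks normal until you spot the one extra line: tokens flow into any wallet freely, but only the creator's address can ever move them out.

```typescript
// Toy trap-token sketch. Real versions hide this logic inside Solidity,
// but the mechanism is exactly this simple.
class JoeCoin {
  private balances = new Map<string, bigint>();

  constructor(private creator: string, supply: bigint) {
    this.balances.set(creator, supply); // a million units by lunchtime
  }

  balanceOf(who: string): bigint { return this.balances.get(who) ?? 0n; }

  transfer(from: string, to: string, amount: bigint): boolean {
    if (from !== this.creator) return false; // the trap: only the creator can ever sell
    if (this.balanceOf(from) < amount) return false;
    this.balances.set(from, this.balanceOf(from) - amount);
    this.balances.set(to, this.balanceOf(to) + amount);
    return true;
  }
}
```

On a block explorer this token looks healthy: balances update, transfers into wallets succeed. The single `from !== this.creator` check is the entire scam.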
Common traps include: - **Rug pulls:** Creator launches token, hypes it up, then drains all the liquidity and disappears - **Honeypots:** You can buy the token but the code prevents you from selling - **Fake tokens:** A token named "Ethereum 2.0" that has nothing to do with Ethereum ## How Tokens Launch Not all token launches look the same: - **Fair launch:** No pre-mine, no insider allocations. Everyone gets in at the same time. Bitcoin is the original fair launch. In practice, truly fair launches are rare. - **Presale / ICO / IDO:** Early investors buy tokens before public launch, usually at a discount. The project raises funds to build. The risk? Those early investors often dump on retail the moment trading opens. - **Airdrops:** Free tokens distributed to early users of a protocol. [Uniswap's UNI airdrop](https://uniswap.org/) gave ~$1,500 worth of tokens to anyone who'd used the platform. These can be life-changing — or worthless. > **Tip:** When evaluating any token launch, check the **token distribution**. If 50% of the supply goes to the team and insiders, you're the exit liquidity. Look at vesting schedules, total supply, and who holds the biggest bags. ## What Actually Gives a Token Value? This is the million-dollar question. A token is just code. What makes one worth $0.000001 and another worth $1,000? **Utility.** Does the token *do* something? ETH pays for gas. LINK pays [Chainlink](https://chain.link/) oracle operators. Tokens with real demand from real usage have a floor. **Governance.** Some tokens let you vote on protocol decisions. UNI holders govern [Uniswap](https://app.uniswap.org/). If the protocol controls billions in value, having a say in its direction is worth something. **Speculation.** Let's be honest — most token value is driven by people betting the price will go up. There's nothing inherently wrong with this (stocks work similarly), but speculation without substance is a house of cards. 
**Scarcity.** Fixed supply plus growing demand equals higher prices. Bitcoin's 21 million cap is the ultimate example. **Network effects.** The more people use a token, the more useful it becomes, the more people want it. This flywheel is what separates tokens that last from tokens that don't. In practice, most tokens derive their value from some combination of all five. The healthiest tokens have strong utility *and* speculation. The most dangerous have only speculation. ## What's Next Now that you understand what tokens are — how they're created, what standards they follow, and what gives them value — it's time to explore one of the most visible (and controversial) use cases for those standards. In **[Part 8](/blog/crypto-unlocked-08-nfts-beyond-jpegs)**, we're diving into **NFTs** — non-fungible tokens. They're way more than overpriced JPEGs. From digital art and gaming assets to real-world ownership and identity, NFTs represent a fundamental shift in how we think about digital property. See you there.
← [Previous: The Multi-Chain World](/blog/crypto-unlocked-06-multi-chain-world) · [Series Index](/blog/series/crypto-unlocked) · [Next: NFTs — Beyond JPEGs](/blog/crypto-unlocked-08-nfts-beyond-jpegs) →
--- --- # Crypto Unlocked Part 8: NFTs — Beyond the JPEGs URL: /blog/crypto-unlocked-08-nfts-beyond-jpegs Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, NFTs, Digital Art, Gaming, Web3, Beginners Series: Crypto Unlocked (Part 8 of 21) --- NFTs aren't just overpriced monkey pictures. From digital art and gaming to real estate deeds and concert tickets — non-fungible tokens are reshaping how we think about ownership. In March 2021, a digital artist named Mike Winkelmann — better known as Beeple — sold a single JPEG file at [Christie's auction house](https://www.christies.com/features/monumental-collage-by-beeple-is-first-purely-digital-artwork-nft-to-come-to-auction-11510-7.aspx) for **$69.3 million**. The art world lost its mind. Twitter lost its mind. Your uncle who barely uses email suddenly had opinions about blockchain. And just like that, three letters entered the global vocabulary: NFT. Two years later, the floor prices of most NFT collections had collapsed over 90%. People were holding digital apes worth less than the gas fees they paid to mint them. The obituaries were written. "NFTs are dead," declared approximately everyone. So... are they? Not even close. But to understand why, you need to look past the JPEGs. ## What Does "Non-Fungible" Actually Mean? Let's start with the word nobody can pronounce at dinner parties. **Fungible** means interchangeable. A dollar bill is fungible — your $20 bill works exactly the same as my $20 bill. We can swap them and neither of us cares. Bitcoin is fungible too (mostly). One BTC equals one BTC. **Non-fungible** means unique. Your house is non-fungible. Even if your neighbor's house has the same floor plan, yours has that weird stain on the ceiling and a slightly bigger garden. They're not interchangeable. Your concert ticket for Row A, Seat 12 is not the same as Row Z, Seat 47, even though they're for the same show. 
An **NFT (Non-Fungible Token)** is simply a unique digital record on a blockchain that proves you own a specific thing. That "thing" could be an image, a music file, a game item, a domain name, a deed to property, or a ticket to an event. The token itself doesn't *contain* the thing — it's more like a certificate of authenticity that points to it. > **Key insight:** NFTs aren't a type of art. They're a type of *ownership*. The art bubble was just the first (loudest) application. ![A futuristic digital art gallery showcasing NFT artworks with neon frames and holographic displays](/assets/blog/crypto-unlocked-08/nft-digital-gallery.jpg) ## How NFTs Work Under the Hood In [Part 7](/blog/crypto-unlocked-07-tokens-and-standards), we covered ERC-20 — the standard for fungible tokens where every unit is identical. NFTs use different standards: - **[ERC-721](https://eips.ethereum.org/EIPS/eip-721):** The original NFT standard on Ethereum. Every token has a unique ID. One token = one specific item. This is what CryptoPunks, Bored Apes, and most "1-of-1" art collections use. - **[ERC-1155](https://eips.ethereum.org/EIPS/eip-1155):** A multi-token standard that supports *both* fungible and non-fungible tokens in a single contract. Perfect for gaming — you might have 1,000 identical health potions (fungible) and one legendary sword (non-fungible) managed by the same smart contract. More gas-efficient for batch operations. - **SPL Tokens on Solana:** Solana doesn't separate fungible and non-fungible standards the way Ethereum does. Instead, an NFT is simply an SPL token with a supply of exactly one and zero decimal places. The [Metaplex](https://www.metaplex.com/) protocol adds the metadata layer on top. Different architecture, same concept. ### The Metadata Problem There's a fundamental problem with how NFTs actually work. When you "own" an NFT, you own a token on the blockchain that contains a token ID and a pointer — usually a URL — to the actual content. 
The image of your ape? That's almost certainly *not* stored on-chain (storing images on Ethereum would cost thousands of dollars in gas). Instead, most NFTs point to off-chain storage: - **[IPFS](https://ipfs.tech/) (InterPlanetary File System):** A decentralized file network where content is addressed by its hash. As long as at least one node pins the file, it stays available. Decent, but not bulletproof — if nobody pins it, it disappears. - **[Arweave](https://www.arweave.org/):** Permanent storage where you pay once and data is stored forever (theoretically). More reliable, but more expensive upfront. - **Regular web servers:** Some NFTs literally point to a company's AWS bucket. If the company goes bankrupt or the server shuts down? Congratulations, you own a token that points to a 404 page. > **Warning:** Before buying any NFT, check where the metadata lives. If it's on a regular web server, your "permanent digital ownership" is only as permanent as someone paying the hosting bill. A small number of projects store everything on-chain (like [Art Blocks](https://www.artblocks.io/) generative art, which stores the code to regenerate the artwork directly on Ethereum). These are genuinely permanent but rare. [code block] ## The 2021 Boom: Digital Gold Rush To understand NFTs today, you have to understand what happened in 2021. It was a perfect storm: - **Beeple's $69.3M sale** at Christie's legitimized NFTs overnight. Traditional art world meets crypto — front page everywhere. - **[CryptoPunks](https://cryptopunks.app/)** — 10,000 pixel-art characters originally given away for free in 2017 — started selling for millions. Visa bought one. Jay-Z used one as his profile picture. (Now owned by [Yuga Labs](https://yuga.com/), which acquired the IP from Larva Labs in 2022.) - **[Bored Ape Yacht Club](https://boredapeyachtclub.com/)** (BAYC) launched on April 30, 2021 at 0.08 ETH (~$190) per ape — selling out in 12 hours. 
By early 2022, floor prices hit 100+ ETH (~$300,000+). Celebrities from Justin Bieber to Snoop Dogg bought in. The "club" aspect — exclusive Discord access, commercial rights to your ape — added a layer of community and identity. By 2024, floor prices had dropped roughly 90% from their peak. - **NBA Top Shot**, **Axie Infinity**, **Art Blocks** — the space exploded in every direction. Monthly NFT trading volume hit $5 billion+ in January 2022. Money was flowing. FOMO was raging. Everyone with a Photoshop license launched a collection. Most were derivative garbage. But hey, number go up. ## The Crash: 90% Down and Then Some What goes up in a speculative frenzy must come down. And it came down *hard*. By 2023, the vast majority of NFT collections had lost 90-95% of their peak value. Many went to zero. [A study by dappGambl](https://dappgambl.com/nfts/dead-nfts/) estimated that over 95% of NFT collections had effectively zero market value. OpenSea's monthly volume dropped from billions to tens of millions. What happened? Everything you'd expect: - **Oversupply:** Tens of thousands of collections flooded the market. Most had no utility, no community, no reason to exist. - **Speculation unwind:** People weren't buying art — they were buying lottery tickets. When the music stopped, there weren't enough chairs. - **Macro environment:** Rising interest rates killed speculative assets across the board. Crypto winter didn't spare NFTs. - **Wash trading:** A significant portion of NFT volume turned out to be fake — people trading with themselves to inflate prices and farm marketplace token airdrops. > **Reality check:** The crash didn't prove NFTs are useless. It proved that *speculating on profile pictures* is not a business model. The technology underneath survived just fine. ## NFT Marketplaces: The Royalty Wars Where you buy and sell NFTs matters — and the marketplace landscape has been a bloodbath. - **[OpenSea](https://opensea.io/):** The original dominant marketplace. 
Founded in 2017, it rode the 2021 wave to a [$13.3 billion valuation in January 2022](https://techcrunch.com/2022/01/04/opensea-raises-300-million-at-13-3-billion-valuation/). But it got complacent and slow to innovate — daily volume cratered 99% from its May 2022 peak by late that year. - **[Blur](https://blur.io/):** Launched in October 2022 and ate OpenSea's lunch almost overnight. Blur targeted pro traders with faster execution, zero fees, and an aggressive token airdrop campaign. By early 2023, it had overtaken OpenSea in volume. - **[Magic Eden](https://magiceden.io/):** Started as the top Solana marketplace, then expanded to Ethereum, Bitcoin Ordinals, and other chains. Now one of the biggest cross-chain NFT platforms. - **[Tensor](https://tensor.trade/):** Solana's pro-trading NFT platform, essentially the Blur of Solana. Fast, data-rich, and popular with active traders. The real drama? **Royalties.** Originally, NFT creators earned a percentage (typically 5-10%) every time their work resold — a revolutionary concept for artists. But Blur and others made royalties optional to attract traders. OpenSea was forced to follow. Creators lost a massive revenue stream, and the debate still rages: should on-chain royalties be enforceable, or is the market right to reject them? It's a genuine tension. Enforced royalties are great for creators but can feel like a tax to traders. The market voted with its wallets, and creators mostly lost. ## Real Use Cases Beyond Art Here's where NFTs get interesting again. Forget the monkeys. The technology — a verifiable, unique digital token proving ownership — is genuinely useful for a lot of things: ![NFT utility beyond art — tickets, gaming items, domain names, and credentials flowing from a blockchain hub](/assets/blog/crypto-unlocked-08/nft-utility-web.png) ### 🎮 Gaming Imagine actually *owning* your in-game items instead of renting them from a game company that can ban your account or shut down servers. 
- **[Immutable X](https://www.immutable.com/):** An Ethereum layer-2 built specifically for gaming NFTs. Zero gas fees for trading. Powers games like **[Gods Unchained](https://godsunchained.com/)** (a trading card game where you actually own your cards) and **[Illuvium](https://illuvium.io/)**. - **[Ronin](https://roninchain.com/):** The chain behind Axie Infinity. Despite a [$620 million hack in March 2022](https://www.bbc.com/news/technology-60933174) — attributed to North Korea's Lazarus Group — the gaming ecosystem continues to develop. - The promise: you buy a sword in one game, use it in another, sell it when you're done. We're not there yet, but the infrastructure is being built. ![Gaming NFT marketplace cards showing items like 3D Shooter characters, controllers, and weapons priced in BNB](/assets/blog/crypto-unlocked-08/nft-gaming-items.jpg) ### 🎵 Music Artists have been screwed by the music industry forever. NFTs offer a way to sell directly to fans without labels and streaming platforms taking the lion's share of the revenue. - Platforms like **Sound.xyz** pioneered music NFTs — letting fans buy limited editions of songs, earning bragging rights and sometimes royalties. The music NFT space continues to evolve with new platforms and models. - The real potential: artists retaining ownership and building direct economic relationships with their audience. No label, no distributor, no platform skimming a cut of every stream. ### 🎫 Ticketing Paper tickets get counterfeited. Digital tickets get scalped. NFT tickets solve both problems. - **[GET Protocol](https://onopen.xyz/)** (now the OPEN Ticketing Ecosystem): Has processed millions of NFT tickets for real events. The ticket is an NFT — verifiable, non-duplicable, and programmable. The artist can even earn a cut of resale. - Every major ticketing company is exploring this. [Ticketmaster has experimented](https://business.ticketmaster.com/) with token-gated experiences. It's a matter of when, not if. 
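That "artist earns a cut of resale" part is just arithmetic the token itself enforces. Here's a hypothetical sketch of a programmable resale split (invented names and numbers for illustration, not GET Protocol's actual contract):

```typescript
// Toy NFT-ticket resale: the transfer logic itself routes a royalty to the
// artist, which is something paper tickets and PDF tickets simply can't do.
interface Ticket { id: number; owner: string; }

function resell(
  ticket: Ticket,
  buyer: string,
  priceCents: number,
  artistRoyaltyBps: number, // basis points, e.g. 500 = 5%
): { sellerProceeds: number; artistCut: number } {
  const artistCut = Math.floor((priceCents * artistRoyaltyBps) / 10_000);
  const sellerProceeds = priceCents - artistCut;
  ticket.owner = buyer; // on-chain, ownership transfer and payout are one atomic step
  return { sellerProceeds, artistCut };
}
```

A $120 ticket resold with a 5% royalty pays the artist $6 and the seller $114, enforced by the same code that moves the ticket.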
### 🌐 Domain Names - **[ENS (Ethereum Name Service)](https://ens.domains/):** Turns your wallet address (0x7a3B...4f2e) into something human-readable like `yourname.eth`. These are NFTs you own and control — with millions of names registered to date. Vitalik Buterin uses `vitalik.eth`. - **[Unstoppable Domains](https://unstoppabledomains.com/):** Similar concept with `.crypto`, `.nft`, `.wallet` domains. No renewal fees — buy once, own forever. ### 🏠 Real Estate Tokenizing property ownership as NFTs enables fractional ownership — buying 1% of an apartment building instead of the whole thing. Still early, and regulatory hurdles are massive, but experiments are happening in multiple jurisdictions. ### 🪪 Identity & Credentials - **Soulbound Tokens (SBTs):** Non-transferable NFTs that represent credentials, achievements, or identity attributes. Your university degree as an SBT — verifiable on-chain, can't be faked, can't be sold to someone else. - **POAPs (Proof of Attendance Protocol):** NFTs that prove you were at an event, attended a conference, or participated in a community moment. Digital collectible badges that actually verify presence. ## PFP Culture: When JPEGs Became Identity One phenomenon worth understanding: **PFP (Profile Picture) culture**. During 2021-2022, your NFT *was* your digital identity. Having a CryptoPunk or Bored Ape as your Twitter avatar signaled membership in a tribe — wealth, early adoption, community belonging. This sounds absurd until you realize people already do this with fashion. A Rolex, a Supreme hoodie, a vintage car — they're all identity signals. PFP NFTs were the digital-native version. Your Bored Ape said "I was early, I'm part of this club, I have skin in the game." The cultural moment faded with the prices, but the *concept* of blockchain-verified digital identity hasn't. It's just looking for its next form. ## Why NFTs Still Matter Let me be blunt: most NFTs were (and are) worthless. 
The 2021 mania was driven by speculation, hype, and greater-fool economics. If you bought a random collection hoping it'd be the next BAYC, you probably lost money. That's the honest truth. But the *technology* — unique digital ownership verified by a public blockchain — is not going away. Consider: - **Digital property rights are inevitable.** As more of our lives happen online, we'll need verifiable ownership of digital goods. NFTs are the best mechanism we have. - **The internet lacks a native ownership layer.** You can copy a JPEG, sure. But you can't copy the blockchain record that proves who bought it, who owns it, and its entire transaction history. That distinction matters more as digital goods get more valuable. - **Creator monetization is broken.** Artists, musicians, game developers — they all get squeezed by platforms and intermediaries. NFTs offer a path (imperfect, still developing) to direct creator-to-consumer economics. - **Interoperability.** An NFT isn't locked to one platform. Your ENS name works across wallets, dApps, and services. Your game item could (theoretically) move between games. This composability is uniquely enabled by open blockchain standards. The bubble popped. Good. Bubbles always pop. But the railroads built during the railroad bubble still carried trains afterward. The websites launched during the dot-com bubble still served pages. And the ownership infrastructure built during the NFT bubble will still verify ownership when the next wave of real applications arrives. > **My take:** We're in the "trough of disillusionment" for NFTs. The hype tourists left. What remains are builders creating actual utility. That's where the real story begins. ## What's Next We've covered what you can *own* on a blockchain — coins, tokens, and now unique digital assets. But what can you *do* with them? 
In **[Part 9: DeFi Fundamentals](/blog/crypto-unlocked-09-defi-fundamentals)**, we're diving into the world of Decentralized Finance — lending, borrowing, trading, and earning yield, all without a bank, a broker, or a permission slip. This is where crypto starts replacing actual financial infrastructure. Let's go.
← [Previous: Tokens & Standards](/blog/crypto-unlocked-07-tokens-and-standards) · [Series Index](/blog/series/crypto-unlocked) · [Next: DeFi Fundamentals](/blog/crypto-unlocked-09-defi-fundamentals) →
--- --- # Crypto Unlocked Part 9: DeFi Fundamentals URL: /blog/crypto-unlocked-09-defi-fundamentals Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, DeFi, Lending, Aave, Beginners Series: Crypto Unlocked (Part 9 of 21) --- Decentralized finance rebuilt banking without banks. Lending, borrowing, and earning yield — all controlled by code, not corporations. Imagine walking into a bank. There's no teller. No manager. No security guard asking you for two forms of ID and your mother's maiden name. Instead, there's a vending machine. You put crypto in, and it lends, borrows, earns interest, or swaps assets for you — instantly, 24/7, with no one's permission required. That's [DeFi](https://en.wikipedia.org/wiki/Decentralized_finance). And it's not a thought experiment. It's been running since 2020, handling billions of dollars, and it never takes a lunch break. ## What Is DeFi, Actually? DeFi stands for **Decentralized Finance**. Strip away the buzzword and it means: financial services built on smart contracts instead of institutions. Remember smart contracts from [Part 5](/blog/crypto-unlocked-05-solana-speed-at-scale)? Self-executing code that lives on a blockchain. DeFi takes that concept and builds an entire banking system on top of it. Lending, borrowing, trading, insurance — all of it, running on code that anyone can inspect and no single company controls. Traditional finance (TradFi, if you want the lingo) works because you trust *institutions*. You trust your bank won't lose your money. You trust the stock exchange to settle trades fairly. You trust regulators to keep everyone honest. DeFi replaces that trust with **transparency**. The code is open source. The rules are enforced by math. And your money is controlled by your wallet, not someone else's database. ## TradFi vs. DeFi: A Side-by-Side Let's make this concrete: - **Opening an account:** TradFi needs your ID, proof of address, credit check, 3-5 business days. DeFi needs a wallet. Takes 30 seconds. 
- **Getting a loan:** TradFi requires credit history, income verification, weeks of waiting. DeFi requires collateral and one transaction. Minutes. - **Earning interest:** TradFi gives you 0.5% if you're lucky. DeFi rates fluctuate but often run 2-8% (sometimes much more, sometimes less). - **Operating hours:** TradFi is 9-5, Monday to Friday, closed on holidays. DeFi is 24/7/365. Christmas Day at 3 AM? The smart contracts don't care. - **Access:** TradFi requires citizenship, residency, sometimes minimum deposits. DeFi is permissionless — if you have an internet connection and a wallet, you're in. > **The tradeoff:** DeFi gives you freedom but also full responsibility. No customer support hotline. No fraud protection. No "forgot my password" reset. You are the bank. ## Lending and Borrowing: The Core of DeFi The killer app of DeFi is surprisingly boring: lending and borrowing. The same thing banks have done for centuries, but without the bank. Here's how it works on platforms like **[Aave](https://aave.com)** or **[Compound](https://compound.finance)**: [code block] **If you want to earn interest:** 1. You deposit your crypto (say, ETH or stablecoins) into a lending pool — a smart contract that holds everyone's deposits together 2. The protocol lends your crypto out to borrowers 3. You earn interest, paid by those borrowers 4. You can withdraw anytime — no lock-up, no penalty **If you want to borrow:** 1. You deposit collateral (crypto you already own) 2. The protocol lets you borrow *different* crypto against that collateral 3. You pay interest on what you borrowed 4. When you're done, you repay the loan plus interest and get your collateral back Simple, right? But there's a catch that confuses everyone at first. ## Why Do You Need $150 to Borrow $100? This is the question that trips up every newcomer: "If I already have crypto, why would I borrow more?" In DeFi, loans are **overcollateralized**. You need to deposit more value than you borrow. 
Typically 150% or more. So to borrow $100 worth of stablecoins, you'd need to lock up $150 worth of ETH. "That's insane. Why not just sell the ETH?" Great question. Here's why it makes sense: - **You're bullish on ETH.** You think ETH will go up, so you don't want to sell. But you need cash *now*. So you borrow stablecoins against your ETH, spend those, and when ETH moons, you repay the loan and still have your ETH (now worth more). - **Tax efficiency.** In many jurisdictions, selling crypto triggers a taxable event. Borrowing against it doesn't. You keep your position and access liquidity. - **Leverage.** Some people borrow stablecoins, use them to buy *more* ETH, deposit that ETH, borrow more... and ride the leverage loop up (or get destroyed on the way down). The overcollateralization exists because **there's no credit check**. The smart contract doesn't know if you're a whale or a teenager. It only knows the collateral you've locked. If you could borrow $100 by depositing $100, there'd be no buffer if prices drop — and the lenders would get wrecked. > **Think of it like a pawnshop.** You leave your watch worth $150, they give you $100 cash. If you never come back, they still have the watch. The lender is always protected. ## Liquidation: When Things Go Wrong Here's where it gets serious. Your collateral is crypto, and crypto is volatile. What happens when the price drops? Let's say you deposited $150 of ETH and borrowed $100 of USDC. If ETH drops 30%, your collateral is now worth $105. That's dangerously close to your loan amount. The protocol has a **liquidation threshold** — usually around 80-85% loan-to-value. Cross it, and the smart contract automatically sells (liquidates) your collateral to repay the loan. No warning phone call. No extension. The code executes. You'll get back whatever's left after the loan is repaid and the liquidation penalty is deducted. But you'll lose a chunk of your collateral. 
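The numbers above reduce to one formula: loan-to-value is simply `loan / collateral`, and the contract liquidates the moment it crosses the threshold. A minimal sketch (the 85% threshold here is illustrative; every lending market sets its own):

```typescript
// Loan-to-value: what fraction of your collateral's value you've borrowed.
function ltv(collateralUsd: number, loanUsd: number): number {
  return loanUsd / collateralUsd;
}

// Liquidation check: the contract doesn't negotiate, it just compares.
function isLiquidatable(
  collateralUsd: number,
  loanUsd: number,
  threshold: number, // e.g. 0.85 for an 85% liquidation threshold
): boolean {
  return ltv(collateralUsd, loanUsd) >= threshold;
}

// The example from the text: $150 of ETH backing a $100 USDC loan.
const start = ltv(150, 100);     // ~0.67, comfortably under an 0.85 threshold
const afterDrop = ltv(105, 100); // after ETH drops 30%: ~0.95, past the threshold
```

At $150 of collateral you sit near 67% LTV; after a 30% ETH drop you're near 95%, well past an 85% threshold, and the position gets liquidated.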
**Liquidation is the single biggest risk in DeFi lending.** It happens automatically, instantly, and the market doesn't care that you were asleep when ETH dropped 20% at 4 AM. > **Pro tip:** If you use DeFi lending, keep your loan-to-value ratio well below the liquidation threshold. Borrow less than you're allowed to. Give yourself a buffer. And monitor your positions — or use tools that alert you when you're getting close. ## Where Does the Yield Come From? This is the right question to ask. When someone promises you 5% or 10% or 50% returns, your first thought should be: *who is paying for this?* In legitimate DeFi lending, the answer is straightforward: - **Borrowers pay interest.** That interest gets distributed to lenders. Supply and demand set the rate. - **Higher demand to borrow = higher rates for lenders.** When everyone wants to borrow a particular asset, the interest rate goes up. - **Lower demand = lower rates.** Simple market dynamics. The yield isn't magic. It's not printed out of thin air. Someone is paying to use your capital. Same as a bank — except you're getting a much bigger cut because there's no bank in the middle taking a fat margin. > **Red flag:** If you can't figure out where the yield comes from, *you* are the yield. This rule has saved people millions. If a protocol offers 100% APY and can't explain why, run. ## TVL: The Scoreboard of DeFi **Total Value Locked (TVL)** is the most-watched metric in DeFi. It measures how much money is deposited across all DeFi protocols — the total collateral sitting in smart contracts. At DeFi's peak in November 2021, TVL across all chains hit approximately [**$178 billion**](https://defillama.com/). During the bear market of 2022-2023, it dropped below $40 billion. It's a rough thermometer for how much capital trusts DeFi enough to participate. As of early 2025, [Aave alone holds over $30 billion in TVL](https://defillama.com/protocol/aave) — making it the single largest DeFi protocol. 
[Compound](https://defillama.com/protocol/compound-finance), one of the pioneers, sits around $1.8 billion. You can track TVL on sites like [DeFiLlama](https://defillama.com/) — it breaks down by protocol, by chain, and over time. When TVL rises, it generally means confidence and adoption are growing. When it drops, people are pulling capital out (or getting liquidated). ![DeFi Total Value Locked over time — from near zero to $178 billion and back](/assets/blog/crypto-unlocked-09/defi-tvl-chart.jpg) ## Flash Loans: The Craziest Innovation in Finance Okay, here's where DeFi gets truly wild. A [**flash loan**](https://docs.aave.com/faq/flash-loans) lets you borrow *millions of dollars* with **zero collateral**. No credit check. No deposit. Nothing. The catch? You have to borrow and repay within a **single transaction**. One atomic blockchain transaction. If you can't repay, the entire transaction reverts — like it never happened. The lender loses nothing. "Why would anyone need to borrow millions for a fraction of a second?" Arbitrage. If ETH is trading at $2,000 on one exchange and $2,010 on another, you can: 1. Flash-borrow $2 million 2. Buy 1,000 ETH on the cheap exchange 3. Sell it on the expensive exchange for $2,010,000 4. Repay the loan ($2,000,000 + small fee) 5. Pocket the profit All in one transaction. All in about 12 seconds. This is something that was **physically impossible** before DeFi. Flash loans democratized arbitrage — you don't need to be a hedge fund with millions in capital. Of course, flash loans have also been used for exploits. Attackers have manipulated [oracle](https://chain.link/) prices, drained liquidity pools, and pulled off multi-million dollar heists using flash-borrowed funds. 
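The all-or-nothing property is easy to model: treat borrow, trade, and repay as a single function that either completes or throws and rolls back. Here's a sketch using the numbers above (the 0.09% premium mirrors Aave v2's flash-loan fee and is an assumption for illustration):

```typescript
// Simulated flash loan: lend the funds, run the caller's strategy, demand
// repayment plus a premium. If repayment falls short we throw, which is the
// off-chain stand-in for the whole transaction reverting as if it never happened.
const FLASH_FEE = 0.0009; // 0.09% premium (roughly Aave v2's rate; illustrative)

function flashLoan(amount: number, strategy: (funds: number) => number): number {
  const proceeds = strategy(amount);     // e.g. buy cheap, sell dear
  const owed = amount * (1 + FLASH_FEE); // principal + premium
  if (proceeds < owed) {
    throw new Error("cannot repay: transaction reverts"); // atomic rollback
  }
  return proceeds - owed; // profit kept by the borrower
}

// The arbitrage from the text: 1,000 ETH bought at $2,000, sold at $2,010.
const profit = flashLoan(2_000_000, (funds) => {
  const eth = funds / 2_000; // buy on the cheap exchange
  return eth * 2_010;        // sell on the expensive one
});
```

With a 0.09% premium, the $10,000 spread nets about $8,200. And if the spread closes mid-transaction, `flashLoan` throws and nobody is out anything, which is exactly why no collateral is needed.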
The [rekt.news leaderboard](https://rekt.news/leaderboard/) tracks the biggest DeFi exploits — billions lost in total across hacks like the [Ronin Bridge ($624M)](https://rekt.news/ronin-rekt/), [Wormhole ($326M)](https://rekt.news/wormhole-rekt/), and countless flash loan attacks. The tool is neutral — the usage isn't always. ## DeFi Summer 2020: The Big Bang DeFi existed before 2020, but it was niche. A handful of protocols, a few hundred million in TVL, mostly used by Ethereum developers. Then **[Compound](https://compound.finance)** launched its [COMP governance token](https://www.comp.xyz/) in June 2020. They started distributing tokens to anyone who lent or borrowed through the protocol — a concept called **liquidity mining**. Suddenly, you weren't just earning interest. You were earning *tokens* on top, which themselves had value. The math was insane. Early participants were earning triple-digit APYs. Word spread. Capital flooded in. Other protocols launched their own tokens and incentive programs. **[Yearn Finance](https://yearn.fi/)**, **[SushiSwap](https://www.sushi.com/)**, **[Curve](https://curve.fi/)** — new protocols popped up weekly. TVL exploded from ~$1 billion in June 2020 to over $15 billion by the end of the year. People called it **DeFi Summer**, and it changed crypto forever. It proved that decentralized financial services could attract serious capital and that code could coordinate billions without a CEO. ## The Risks Are Real DeFi isn't a free lunch. If you're going to play in this space, respect the risks: - **Smart contract bugs.** Code can have vulnerabilities. [Billions have been lost to exploits](https://rekt.news/leaderboard/). Just because a contract is audited doesn't mean it's bulletproof. - **Oracle manipulation.** DeFi protocols need price data (from [oracles like Chainlink](https://chain.link/)). If an attacker manipulates the price feed, they can trick the protocol into bad trades or unfair liquidations. 
- **Rug pulls.** A developer launches a protocol, attracts deposits, then drains the smart contract and disappears. More common with unaudited, anonymous projects. - **Impermanent loss.** (We'll cover this in [Part 10](/blog/crypto-unlocked-10-dexs-liquidity-pools), but it's real.) - **Regulatory risk.** Governments are still figuring out how to regulate DeFi. Rules could change and impact protocols you're using. - **Composability risk.** DeFi protocols build on each other like LEGO blocks. If one piece breaks, everything stacked on top can collapse. This is sometimes called "DeFi contagion." > **The golden rule of DeFi:** Never deposit more than you can afford to lose. Start small. Use established protocols ([Aave](https://aave.com), [Compound](https://compound.finance), [Sky (formerly MakerDAO)](https://sky.money/)). Read the docs. And don't chase unsustainable yields — if it sounds too good to be true, the smart contract doesn't care about your feelings when it liquidates you. ## Key Takeaways - **DeFi = financial services without intermediaries**, built on smart contracts - **Lending and borrowing** are the foundation — deposit crypto to earn, or borrow against collateral - **Overcollateralization** protects lenders because there are no credit checks - **Liquidation** happens automatically when your collateral value drops too low - **Yield comes from borrowers** — if you can't identify the source, you are the source - **TVL** measures how much capital is locked in DeFi protocols - **Flash loans** enable zero-collateral borrowing within a single transaction - **DeFi Summer 2020** was the breakout moment that proved the concept at scale - **The risks are real** — bugs, exploits, and rug pulls have cost billions ## What's Next You now understand DeFi's banking layer — lending, borrowing, and earning yield. But there's a whole other side: **trading without an exchange**. 
In [Part 10](/blog/crypto-unlocked-10-dexs-liquidity-pools), we'll dive into **decentralized exchanges (DEXs)** and **liquidity pools**. How does Uniswap let you trade tokens without an order book? What are liquidity providers, and why do they sometimes lose money? And what the hell is an automated market maker? It's where DeFi gets really interesting. See you there.
← [Previous: NFTs — Beyond JPEGs](/blog/crypto-unlocked-08-nfts-beyond-jpegs) · [Series Index](/blog/series/crypto-unlocked) · [Next: DEXs & Liquidity Pools](/blog/crypto-unlocked-10-dexs-liquidity-pools) →
--- --- # Crypto Unlocked Part 10: DEXs & Liquidity Pools URL: /blog/crypto-unlocked-10-dexs-liquidity-pools Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, DEX, Liquidity Pools, Uniswap, AMM, Beginners Series: Crypto Unlocked (Part 10 of 21) --- How decentralized exchanges work without order books, what liquidity pools are, and why impermanent loss is the price you pay for being the house. What if I told you there's a type of exchange that runs 24/7, never asks for your ID, can't freeze your funds, has no CEO, no employees, and does billions of dollars in daily volume — all powered by a math formula that fits on a napkin? Welcome to the world of decentralized exchanges. This is where DeFi gets *really* interesting. ## The Old Way: Order Books To understand why DEXs are revolutionary, you need to understand what they replaced. Traditional exchanges — whether it's the NYSE or [Binance](https://www.binance.com/) — use **[order books](https://www.investopedia.com/terms/o/order-book.asp)**. Think of it like a marketplace noticeboard: - **Buyers** post what they want to buy and at what price ("I'll buy 1 ETH for $3,000") - **Sellers** post what they want to sell ("I'll sell 1 ETH for $3,010") - When a buyer's price meets a seller's price, a trade happens This works great... when you have millions of traders creating a thick, liquid market. But what if you want to trade some obscure token at 3 AM? There might be nobody on the other side of your trade. The order book is thin, spreads are wide, and you're stuck. More importantly, order books need someone to *run* the matching engine. That someone is a centralized company. With your money. And your data. ## The New Way: Automated Market Makers In November 2018, a former mechanical engineer named Hayden Adams launched **[Uniswap](https://uniswap.org/)** and changed everything — inspired by a [blog post by Vitalik Buterin](https://vitalik.eth.limo/general/2017/06/22/marketmakers.html) on automated market makers. 
Instead of matching buyers with sellers, Uniswap introduced the **Automated Market Maker (AMM)** — a smart contract that *is* the market. Here's the core idea: instead of an order book, you have a **liquidity pool**. It's a smart contract holding a pile of two tokens. Anyone can trade against that pile, and a simple math formula determines the price. That formula? **x × y = k** (the [constant product formula](https://docs.uniswap.org/contracts/v2/concepts/protocol-overview/glossary#constant-product-formula)) That's it. That's the revolution. [code block] Let me break it down: - **x** = the amount of Token A in the pool - **y** = the amount of Token B in the pool - **k** = a constant (the product of x and y must always stay the same) Say a pool has 10 ETH and 30,000 USDC. That means k = 300,000. If you want to buy ETH, you add USDC to the pool and remove ETH — but the product must remain 300,000. The math automatically adjusts the price based on how much is in the pool. > **The simple version:** The more of a token people buy from the pool, the more expensive it gets. The more they sell into the pool, the cheaper it gets. Supply and demand, enforced by math. No company. No employees. No downtime. Just a smart contract doing multiplication. (In practice, Uniswap applies a [0.30% fee](https://docs.uniswap.org/contracts/v2/concepts/protocol-overview/how-uniswap-works) to each trade which slightly increases *k* over time — that's how LPs get paid.) ## How Prices Actually Work Let's trace through a real example. Our pool has: - **10 ETH** and **30,000 USDC** (k = 300,000) - The implied price of ETH is 30,000 ÷ 10 = **$3,000** You want to buy 1 ETH. You need to add enough USDC so that after removing 1 ETH, the constant holds: - After your trade: 9 ETH × ? USDC = 300,000 - ? = 33,333 USDC - You need to add 33,333 - 30,000 = **3,333 USDC** for 1 ETH Wait — that's $3,333, not $3,000! The price moved *during your trade*. 
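You can verify that pool math yourself — a minimal constant-product sketch (real pools also charge a swap fee, which is ignored here for clarity):

```typescript
// Minimal constant-product (x * y = k) swap math, fees ignored.
// Mirrors the worked example: a pool of 10 ETH and 30,000 USDC.

function usdcInForEthOut(poolEth: number, poolUsdc: number, ethOut: number): number {
  const k = poolEth * poolUsdc;           // invariant: must hold after the trade
  const newUsdc = k / (poolEth - ethOut); // USDC the pool must end up holding
  return newUsdc - poolUsdc;              // what the trader has to pay in
}

const spot = 30_000 / 10;                    // implied price: $3,000
const cost = usdcInForEthOut(10, 30_000, 1); // ~3,333.33 USDC for 1 ETH
const impact = cost / spot - 1;              // ~11% worse than spot

// Same 1 ETH trade against a pool 1,000x deeper barely moves the price:
const deepCost = usdcInForEthOut(10_000, 30_000_000, 1); // ~3,000.30 USDC
```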
That's because you're buying a significant chunk of the pool (10% of all the ETH). This price impact is called **slippage**, and it's more noticeable in smaller pools or with larger trades. > **💡 This is why big pools matter.** A pool with 10,000 ETH and 30,000,000 USDC would barely budge on a 1 ETH trade. More liquidity = less slippage = better prices for traders. ## Becoming the House: Liquidity Providers So who puts the tokens in the pool? **Liquidity Providers (LPs)** — regular people like you and me. Here's the deal: you deposit an equal value of two tokens into a pool. In return, you earn a cut of every single trade that happens in that pool. On [Uniswap v2](https://docs.uniswap.org/contracts/v2/overview), that's a flat 0.3% of each swap, distributed proportionally to all LPs. (Uniswap v3 and v4 offer [multiple fee tiers](https://docs.uniswap.org/concepts/protocol/fees) — 0.01%, 0.05%, 0.3%, and 1% — so pools can match their fee to the pair's volatility.) Think of it like owning a tiny piece of a currency exchange booth at the airport. Every time someone swaps euros for dollars, you get a slice. You're literally *being the house*. When you deposit tokens, you receive **LP tokens** (sometimes called "receipt tokens") that represent your share of the pool. These are like a claim ticket. Want your tokens back? Burn the LP tokens and withdraw your share — including any fees you've earned. > **🧾 LP tokens are real tokens** — you can hold them, transfer them, and in many DeFi protocols, stake them for additional rewards. This is where "yield farming" comes from (more on that in [Part 11](/blog/crypto-unlocked-11-advanced-defi)). ## The Catch: Impermanent Loss Here's the part nobody talks about until it's too late. **[Impermanent loss](https://www.youtube.com/watch?v=8XJ1MSTEuU0)** is the single most important concept for anyone thinking about providing liquidity. 
![Impermanent loss visualized — comparing holding vs providing liquidity as token price rises, showing the growing gap in value](/assets/blog/crypto-unlocked-10/impermanent-loss.jpg) Let me explain with a simple example: **Scenario:** You provide liquidity to an ETH/USDC pool when ETH is $3,000. You deposit 1 ETH + 3,000 USDC (total value: $6,000). **What happens next:** ETH goes to $4,000. Great news, right? **If you had just held:** 1 ETH ($4,000) + 3,000 USDC = **$7,000** **What you actually have in the pool:** Due to the constant product formula, arbitrage traders have rebalanced your position. You now have roughly 0.866 ETH + 3,464 USDC = **$6,928** **Your impermanent loss:** $7,000 - $6,928 = **$72** (about 1%) You still made money compared to your initial $6,000 — but you made *less* than if you'd just held the tokens in your wallet. The pool constantly rebalances your holdings, selling your winners and buying your losers. It's called "impermanent" because if ETH goes back to $3,000, the loss disappears. But if you withdraw while prices have diverged, the loss becomes very permanent. > **⚠️ The rule of thumb:** Impermanent loss hurts most when the two tokens in your pair diverge significantly in price. Pairs of correlated assets (like USDC/USDT or wETH/stETH) have much lower impermanent loss. The question every LP needs to answer: **Do the trading fees I earn outweigh my impermanent loss?** Sometimes yes, sometimes very much no. (Tools like [dailydefi.org's IL calculator](https://dailydefi.org/tools/impermanent-loss-calculator/) can help you model different scenarios before you commit.) ## Concentrated Liquidity: The Uniswap v3 Upgrade Classic AMMs spread your liquidity across all possible prices — from $0 to infinity. That's wildly inefficient. Most of your capital sits in price ranges that will never be used. 
(Uniswap's own data showed the v2 DAI/USDC pair utilized only [~0.50% of its capital](https://docs.uniswap.org/concepts/protocol/concentrated-liquidity) for trades in the $0.99–$1.01 range where virtually all volume occurred.) **[Uniswap v3](https://docs.uniswap.org/concepts/protocol/concentrated-liquidity)** introduced **concentrated liquidity**: you choose the price range where you want to provide liquidity. ![Concentrated liquidity — providing liquidity within a specific price range instead of spreading it from zero to infinity](/assets/blog/crypto-unlocked-10/concentrated-liquidity.jpg) For example, instead of covering ETH from $0 to $∞, you could say: "I'll provide liquidity for ETH between $2,500 and $3,500." Your capital is now concentrated in that range, earning *way* more fees per dollar deployed. The tradeoff? If the price moves outside your range, your position stops earning fees entirely and you're left holding 100% of the less valuable token. It's more capital efficient but requires active management. (Note: [Uniswap v4](https://docs.uniswap.org/contracts/v4/overview), launched in January 2025, keeps concentrated liquidity but adds a "hooks" system that lets developers customize pool behavior — auto-rebalancing, dynamic fees, and more.) > **Think of it like this:** Classic AMM = a fishing net spread across the entire ocean. Concentrated liquidity = fishing where the fish actually are. More efficient, but you need to know where to cast. ## Providing Liquidity: A Step-by-Step Walkthrough Here's what it actually looks like to become an LP: 1. **Choose your DEX** — [Uniswap](https://app.uniswap.org/) (Ethereum), [Raydium](https://raydium.io/) (Solana), [PancakeSwap](https://pancakeswap.finance/) (BNB Chain), etc. 2. **Connect your wallet** — MetaMask, Phantom, whatever fits your chain 3. **Pick a pool** — Usually sorted by trading volume and fee tier 4. **Deposit tokens** — You need equal *value* of both tokens (not equal amounts). 
If ETH is $3,000, you'd deposit 1 ETH + 3,000 USDC 5. **Set your price range** (if using v3-style concentrated liquidity) — Tighter range = more fees but higher risk of going out of range 6. **Confirm the transaction** — Pay gas, sign, done 7. **Receive LP tokens** — These represent your pool share 8. **Monitor and collect fees** — Some protocols auto-compound, others require manual claiming > **🔰 Start small.** Try a stablecoin pair first (USDC/USDT). Minimal impermanent loss, and you'll learn the mechanics without the stress of volatile assets moving against you. ## Slippage: Why Your Trade Price Isn't What You Expected We touched on this earlier, but it deserves its own callout. **Slippage** is the difference between the price you expect and the price you actually get. In an AMM, every trade moves the price. Small trades in big pools? Barely any slippage. But: - **Big trade + small pool** = significant slippage - **Volatile market + pending transaction** = price moves before your trade executes Most DEX interfaces let you set a **slippage tolerance** — say 0.5% or 1%. If the price moves more than that before your transaction confirms, the trade fails instead of giving you a bad deal. > **⚠️ Be careful with high slippage tolerance.** Setting it to 10%+ is an invitation for MEV bots to sandwich your trade (buying before you and selling after, pocketing the difference). Keep it as tight as you can. ## The Major AMMs You Should Know The AMM landscape is massive and multi-chain. Here are the ones that matter: - **[Uniswap](https://uniswap.org/)** — The OG. Ethereum mainnet + L2s (Arbitrum, Optimism, Base). The gold standard - **[Curve Finance](https://curve.fi/)** — Specialized in stablecoin and pegged-asset swaps. Uses a modified bonding curve ([StableSwap invariant](https://curve.fi/whitepaper)) for extremely low slippage on similar-value tokens - **[PancakeSwap](https://pancakeswap.finance/)** — The biggest DEX on BNB Chain. 
Lower fees, sometimes shadier tokens - **[Raydium](https://raydium.io/)** — Leading AMM on Solana. Fast, cheap, integrated with [OpenBook](https://www.openbook-solana.com/)'s on-chain order book (the community successor to Serum after FTX's collapse) - **[Orca](https://www.orca.so/)** — Another Solana favorite. Clean UX, concentrated liquidity via "Whirlpools" - **[Aerodrome](https://aerodrome.finance/)** — The dominant DEX on Base (Coinbase's L2). A Velodrome fork that's become the liquidity hub for the Base ecosystem Each chain tends to have a dominant DEX. When you're on that chain, that's usually where you'll find the deepest liquidity and best prices. ## Why This Matters DEXs and AMMs aren't just "decentralized Binance." They're something fundamentally new — permissionless financial infrastructure that anyone can build on, contribute to, or earn from. You can: - **Trade** any token the moment someone creates a pool for it (no listing process) - **Earn** passive income by providing liquidity - **Build** applications on top of existing pools - **Participate** in governance of these protocols The combination of AMMs, LP tokens, and composability is what makes DeFi a financial Lego set. And we've only scratched the surface. ## What's Next In **[Part 11](/blog/crypto-unlocked-11-advanced-defi)**, we're going deeper into the DeFi rabbit hole. We'll cover **yield farming strategies**, **liquid staking and restaking** (yes, staking your staked ETH), **auto-compounding vaults**, and how all these DeFi Legos snap together to create increasingly complex (and sometimes absurdly risky) financial products. If this part made you feel like the house, next part will show you how the house can also be the bank, the insurance company, and the hedge fund — all at once. See you there. 🏊‍♂️
← [Previous: DeFi Fundamentals](/blog/crypto-unlocked-09-defi-fundamentals) · [Series Index](/blog/series/crypto-unlocked) · [Next: Advanced DeFi](/blog/crypto-unlocked-11-advanced-defi) →
--- --- # Crypto Unlocked Part 11: Advanced DeFi URL: /blog/crypto-unlocked-11-advanced-defi Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, DeFi, Yield Farming, Staking, Beginners Series: Crypto Unlocked (Part 11 of 21) --- Yield farming, liquid staking, restaking, and the difference between real yield and ponzinomics. The DeFi rabbit hole goes deep. Welcome to the deep end of the pool. In [Part 10](/blog/crypto-unlocked-10-dexs-liquidity-pools), you learned the fundamentals — swapping tokens and providing liquidity. That was DeFi 101. Now we're going further. Yield farming, liquid staking, restaking, auto-compounding vaults, governance wars, and the uncomfortable question every DeFi participant eventually has to face: *is this real yield, or am I the exit liquidity?* This is where DeFi gets both incredibly powerful and incredibly dangerous. Let's dig in. ## Yield Farming: The Great APY Hunt Yield farming is exactly what it sounds like — you're a farmer, except instead of growing crops, you're growing returns by moving your capital across DeFi protocols to wherever the yield is highest. Here's how it typically works: 1. You deposit tokens into a protocol (a lending pool, a liquidity pool, a vault) 2. The protocol rewards you with its own governance token on top of any natural yield 3. You sell that token for more of what you deposited 4. Repeat Simple, right? The 2020 "[DeFi Summer](https://www.coindesk.com/learn/what-is-defi/)" kicked off when [Compound Finance](https://compound.finance/) started distributing its COMP governance token to users in June of that year. Suddenly, people were earning **thousands of percent APY** by providing liquidity and lending. New protocols would launch, offer insane token rewards to attract liquidity, and farmers would pile in, harvest the rewards, dump the token, and move on to the next farm. [code block] > **💡 Key term:** APY (Annual Percentage Yield) includes compounding. APR (Annual Percentage Rate) doesn't.
A 100% APR compounded daily is actually ~171% APY. Always check which one a protocol is showing you — some use APY to make numbers look bigger. ## Liquidity Mining: Getting Paid to Provide Liquidity Liquidity mining is a specific type of yield farming where a protocol pays you extra tokens for providing liquidity to its pools. It was the rocket fuel of DeFi's growth. Think of it like a new restaurant offering free meals to its first 100 customers. The restaurant (protocol) needs people in the seats (liquidity in the pools) to function, so it subsidizes them with rewards. Early customers get a great deal. But once the free meals stop... do people keep coming back? That's the billion-dollar question. **The yield farming meta** — deposit → earn token → sell token → repeat — has a fundamental problem: if everyone is farming a token just to sell it, who's buying? The protocol is essentially paying for liquidity with inflation. When the rewards dry up, the liquidity leaves, the token drops, and latecomers are left holding the bag. This doesn't mean all yield farming is bad. It means you need to understand *where the yield comes from*. ## Real Yield vs. Emissions: The Most Important Question in DeFi This is the single most important concept in this entire article. Maybe in all of DeFi. Every time you see an attractive APY, ask yourself one question: **Where does the money come from?** There are only two answers: - **Real yield:** The protocol generates actual revenue from fees, and shares that revenue with token holders or liquidity providers. The money comes from users paying for a service. - **Emissions:** The protocol prints its own token and hands it out as rewards. The money comes from... nowhere. It's inflation dressed up as yield. 
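Whichever side of that divide a protocol sits on, never take an advertised rate at face value. The APR-to-APY distinction from the tip above is one line of arithmetic you can check yourself:

```typescript
// Convert a nominal APR to the APY you'd actually earn when
// compounding n times per year: APY = (1 + APR/n)^n - 1.

function aprToApy(apr: number, periodsPerYear: number): number {
  return (1 + apr / periodsPerYear) ** periodsPerYear - 1;
}

// 100% APR, compounded daily — roughly 171% APY, per the tip above
const dailyCompounded = aprToApy(1.0, 365);

// With no compounding, APR and APY are the same number
const simple = aprToApy(1.0, 1); // 1.0, i.e. 100%
```

If a protocol's dashboard shows a number this formula can't reproduce from its stated rate and compounding frequency, that's worth investigating before you deposit.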
Here's how to spot the difference:

| | Real Yield | Emissions |
| --- | --- | --- |
| **Source** | Trading fees, interest, liquidations | Newly minted protocol tokens |
| **Sustainable?** | Yes, if users keep using the protocol | No, eventually rewards must decrease |
| **Examples** | GMX (fee sharing), Aave (interest income) | Most farm tokens in 2020-2021 |
| **Red flag** | Low but steady APY (5-20%) | Sky-high APY (500%+) that drops fast |

**[GMX](https://gmx.io/)** is the poster child of real yield. It's a decentralized perpetual exchange. Traders pay fees to trade. Those fees get distributed to GMX stakers and liquidity providers. The yield comes from actual economic activity — people trading. **[Aave](https://aave.com/)** generates real yield from borrowers paying interest. If you lend USDC on Aave, your interest comes from borrowers. That's real. Sustainable. Boring, even. And boring is good. > **🔑 Rule of thumb:** If a protocol can't explain where the yield comes from without mentioning its own token, be very skeptical. Real businesses generate revenue. Ponzi schemes generate tokens. ## Liquid Staking: Have Your Cake and Eat It Too Remember staking from earlier in this series? You lock up your ETH (or SOL, or other proof-of-stake tokens) to help secure the network, and you earn rewards. The problem? Your capital is locked. You can't do anything else with it. Liquid staking solves this elegantly. [code block] When you stake ETH through **[Lido](https://lido.fi/)**, you receive **stETH** (staked ETH) — a token that represents your staked ETH plus the staking rewards it's accumulating. You still earn the ~3-4% staking yield, but now you can *also* use stETH in DeFi: - Use stETH as collateral to borrow on Aave - Provide stETH liquidity on Curve - Deposit stETH in a vault for extra yield It's like getting a receipt for your deposit at the bank, except that receipt is itself money you can spend and invest.
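Under the hood, most receipt tokens are simple share accounting. Here's a toy sketch of the exchange-rate model — closer to how rETH or wstETH behave than rebasing stETH, and nothing like any protocol's actual contract code:

```typescript
// Simplified "receipt token" accounting for liquid staking, using an
// exchange-rate model: your share count stays fixed, and each share
// claims more of the underlying ETH as staking rewards accrue.

class LiquidStakingPool {
  private totalEth = 0;    // ETH held by the pool (stake + rewards)
  private totalShares = 0; // receipt tokens in circulation

  deposit(eth: number): number {
    // Mint shares at the current exchange rate (1:1 for the first depositor)
    const shares =
      this.totalShares === 0 ? eth : eth * (this.totalShares / this.totalEth);
    this.totalEth += eth;
    this.totalShares += shares;
    return shares;
  }

  accrueRewards(eth: number): void {
    // Rewards raise ETH-per-share for everyone; no new shares are minted
    this.totalEth += eth;
  }

  ethValueOf(shares: number): number {
    return shares * (this.totalEth / this.totalShares);
  }
}

const pool = new LiquidStakingPool();
const myShares = pool.deposit(10); // stake 10 ETH, get 10 receipt tokens
pool.accrueRewards(0.4);           // ~4% of rewards flow into the pool
// Same 10 tokens now redeem for 10.4 ETH — the "receipt" grew in value
const redeemable = pool.ethValueOf(myShares);
```

Because the receipt's redemption value only grows relative to the pool, later depositors mint proportionally fewer shares per ETH — which is why these tokens can trade near, but not exactly at, 1:1 on the open market.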
Other major liquid staking protocols: - **[Rocket Pool](https://rocketpool.net/) (rETH)** — more decentralized than Lido, anyone can run a node - **[Jito](https://www.jito.network/) (jitoSOL)** — liquid staking for Solana, with MEV rewards baked in - **[Marinade](https://marinade.finance/) (mSOL)** — another popular Solana liquid staking option > **⚠️ Risk alert:** Liquid staking tokens should trade at roughly 1:1 with the underlying asset, but they can depeg during market stress. In June 2022, stETH traded at a ~5% discount to ETH as panic sellers dumped it. If you're borrowing against your liquid staking token, a depeg can trigger liquidation. ## Restaking: Staking Your Staked ETH (Yes, Really) Just when you thought stacking yield couldn't get any more recursive, along came **[EigenLayer](https://app.eigenlayer.xyz/)** with restaking. Here's the idea: your staked ETH (via stETH or native staking) is already securing Ethereum. EigenLayer lets you *restake* that same ETH to simultaneously secure other protocols and services — oracles, bridges, data availability layers — earning additional rewards on top of your staking yield. Think of it as your security deposit for an apartment also being used to guarantee your gym membership. Same capital, multiple jobs, multiple rewards. The ecosystem that's grown around this is wild: - **[EigenLayer](https://docs.eigenlayer.xyz/)** — the restaking protocol itself - **Liquid restaking tokens (LRTs)** — tokens like eETH (from [EtherFi](https://www.ether.fi/)) or pufETH (from [Puffer](https://www.puffer.fi/)) that represent your restaked position - **AVSs (Actively Validated Services)** — the protocols being secured by restaked ETH It's yield on yield on yield. And yes, it also means risk on risk on risk. Each layer adds smart contract risk, slashing risk (your ETH can be penalized if an AVS misbehaves), and complexity risk. The matryoshka doll of DeFi. 
## Vaults and Auto-Compounders: Set It and Forget It Not everyone wants to manually harvest rewards, swap tokens, and redeposit every day. That's where vaults come in. **[Yearn Finance](https://yearn.fi/)** pioneered the concept: you deposit tokens into a vault, and Yearn's strategies automatically farm the best yields, compound your returns, and optimize gas costs by batching transactions across all vault users. **[Beefy Finance](https://beefy.com/)** does something similar across multiple blockchains — Ethereum, Polygon, Arbitrum, BSC, you name it. The value proposition is simple: - **Without a vault:** You deposit into a farm, manually claim rewards every few days, swap them back, redeposit, pay gas each time - **With a vault:** You deposit once. The vault does everything automatically. Your position grows over time. Vaults charge a fee (usually a percentage of the yield), but for most people, the convenience and compounding efficiency more than make up for it. > **💡 Pro tip:** Vaults are great for "set and forget" DeFi. But always check what strategy a vault is using under the hood. You're trusting the vault developers to write secure, profitable strategies. A bug in a vault strategy can drain everyone's funds. ## The Curve Wars: Game Theory Meets DeFi Governance This one's a rabbit hole within a rabbit hole, but it's one of the most fascinating episodes in DeFi history. **[Curve Finance](https://curve.fi/)** is a DEX optimized for stablecoins and similar assets. It has its own token, CRV. If you lock your CRV tokens for up to 4 years, you get **veCRV** (vote-escrowed CRV), which lets you vote on which liquidity pools get the highest CRV rewards. ![The Curve Wars — protocols battling for control of CRV emissions](/assets/blog/crypto-unlocked-11/curve-wars-battle.jpg) Here's where it gets spicy: if you're a protocol and you want people to provide liquidity for *your* token on Curve, you want those CRV rewards directed to your pool.
So protocols started **bribing** veCRV holders to vote for their pools. Then **[Convex Finance](https://www.convexfinance.com/)** entered the chat. Convex aggregates CRV from many users, locks it all as veCRV, and lets Convex token holders (vlCVX) control those votes. Suddenly, controlling Convex meant controlling Curve's reward emissions — which meant controlling where billions of dollars in liquidity flowed. Protocols were spending millions to acquire CVX tokens. It became a full-on arms race — the "Curve Wars." This is **veTokenomics** in action: lock tokens → get governance power → direct rewards → create economic incentives. It's game theory, politics, and finance all mashed together. DeFi at its most creative and chaotic. ## Ponzinomics: When Yield Is Too Good to Be True Let's talk about the elephant in the room. Not all DeFi yield is created equal, and some of it is straight-up unsustainable. Here are the red flags: **🚩 Unsustainable APYs** If a protocol is offering 10,000% APY, ask yourself: what business on Earth generates that kind of return? The answer is none. The yield is coming from new depositors' money or token inflation. That's a ticking time bomb. **🚩 OHM forks (3,3) mania** In late 2021, **OlympusDAO** popularized the (3,3) meme — the idea that if everyone stakes and nobody sells, everyone wins. Dozens of copycat protocols launched on every chain, each promising even higher staking APYs. Most went to zero within weeks. The game theory only works when new money keeps entering. When it stops, the music stops. **🚩 Anchor Protocol** The most devastating example. [Anchor Protocol](https://en.wikipedia.org/wiki/Terra_(blockchain)), on the Terra/Luna blockchain, offered a "stable" ~19.5% yield on UST (an algorithmic stablecoin). Billions of dollars poured in. Where did the yield come from? Mostly from reserves that were being steadily depleted. 
When the reserves ran out and confidence wavered in May 2022, UST depegged, Luna hyperinflated, and [**~$45 billion in market cap was wiped out**](https://en.wikipedia.org/wiki/Terra_(blockchain)) in a single week. People lost their life savings. Terraform Labs later [filed for bankruptcy](https://www.reuters.com/technology/terraform-labs-files-chapter-11-bankruptcy-protection-2024-01-22/) in January 2024. > **⚠️ The golden rule:** If you can't explain where the yield comes from in one sentence using real-world economics, don't put your money in it. "Revenue from trading fees" is a real answer. "Staking rewards from the protocol token" is a circular answer. ## Risk Management: Surviving the DeFi Jungle Advanced DeFi is high-risk by nature. Here's how to manage it: - **Diversify across protocols.** Don't put everything in one vault, one chain, or one strategy. Smart contract hacks happen. - **Understand smart contract risk.** Every protocol you interact with is a potential point of failure. Has the code been audited? By whom? How long has it been running without issues? - **Watch for oracle risk.** Many DeFi protocols depend on price feeds (oracles). If the oracle gets manipulated or goes down, liquidations can cascade. - **Size your positions.** DeFi yield farming should be "money you can afford to lose" territory. Not your emergency fund. Not your rent money. - **Stay updated.** Follow protocol governance proposals. A governance vote can change tokenomics, fee structures, or risk parameters overnight. - **Use established protocols.** There's a reason "battle-tested" is a compliment in DeFi. Aave, Compound, Curve, and Uniswap have survived multiple market cycles. The new fork-of-a-fork launched last Tuesday? Maybe give it a few months. > **🔑 Remember:** In DeFi, you are your own risk manager. There's no FDIC insurance, no customer support hotline, no bailout. The yields can be spectacular precisely *because* the risks are real. 
## The Bottom Line Advanced DeFi is the financial frontier — equal parts innovation and chaos. Yield farming can be profitable if you understand the mechanics. Liquid staking and restaking are genuinely useful innovations that improve capital efficiency. Vaults make complex strategies accessible. And the Curve Wars showed that DeFi governance can be as strategic and competitive as any traditional market. But for every GMX generating real fees, there's an OHM fork promising 80,000% APY before collapsing to zero. The difference between building wealth and getting wrecked in DeFi comes down to one thing: understanding where the yield comes from. If you take away one thing from this article, let it be this: **real yield comes from real revenue.** Everything else is musical chairs. ## What's Next We've been exploring DeFi — the decentralized side of crypto trading and finance. But the reality is, most people's first crypto experience is on a centralized exchange like Coinbase or Binance. In **[Part 12](/blog/crypto-unlocked-12-cexs-vs-dexs)**, we'll compare **CEXs vs. DEXs** — centralized and decentralized exchanges. The trade-offs between convenience and control, custody and sovereignty. When should you use each, and what are the real risks of both? See you there.
← [Previous: DEXs & Liquidity Pools](/blog/crypto-unlocked-10-dexs-liquidity-pools) · [Series Index](/blog/series/crypto-unlocked) · [Next: CEXs vs DEXs](/blog/crypto-unlocked-12-cexs-vs-dexs) →
--- --- # Crypto Unlocked Part 12: CEXs vs DEXs URL: /blog/crypto-unlocked-12-cexs-vs-dexs Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Exchange, CEX, DEX, FTX, Beginners Series: Crypto Unlocked (Part 12 of 21) --- Centralized exchanges are easy. Decentralized exchanges are trustless. The FTX collapse showed why the difference matters more than convenience. In November 2022, [FTX](https://en.wikipedia.org/wiki/FTX_(company)) was the third-largest crypto exchange on the planet by volume. Super Bowl ads, celebrity endorsements, a founder valued at an estimated $16 billion. Nine days — from the first [CoinDesk exposé on November 2nd](https://www.coindesk.com/business/2022/11/02/divisions-in-sam-bankman-frieds-crypto-empire-blur-on-his-trading-titan-alameda-balance-sheet/) to the bankruptcy filing on November 11th — and it was over. Billions in customer funds — gone. Not hacked. Not exploited by some anonymous attacker. Just... taken. By the people running the exchange. [code block] [code block] If you ever needed a reason to understand the difference between centralized and decentralized exchanges, that's it. ## What Is a Centralized Exchange? A centralized exchange (CEX) is basically a bank for crypto. You create an account, verify your identity, deposit money, and trade. [Binance](https://www.binance.com/), [Coinbase](https://www.coinbase.com/), [Kraken](https://www.kraken.com/) — these are all centralized exchanges. The "centralized" part means there's a company running the show. They match buyers with sellers. They hold your funds in their wallets. They decide which tokens get listed and which don't. They comply with regulations, freeze accounts when told to, and can lock you out if they want. Sound familiar? It should. It works exactly like your regular brokerage or bank — just with crypto instead of stocks or euros. And honestly? For most people getting started, that's fine. 
CEXs are popular for good reasons: - **Fiat on-ramps** — You can deposit euros or dollars directly from your bank account. This is still the easiest way to turn "normal" money into crypto. - **Simple interfaces** — Coinbase especially is designed for people who've never touched crypto. Buy Bitcoin in three clicks. - **Customer support** — Something goes wrong? There's someone to email (whether they actually respond quickly is another story). - **Liquidity** — Major CEXs handle billions in daily volume. Your trades execute instantly at tight spreads. - **Advanced tools** — Limit orders, stop losses, margin trading, futures. All the trading instruments professionals expect. For buying your first Bitcoin or converting your paycheck into crypto, a CEX is genuinely the path of least resistance. ## The Custody Tradeoff Here's the catch, and it's a big one: when your crypto sits on a centralized exchange, it's not really yours. Remember the golden rule from earlier in this series? **Not your keys, not your coins.** When you deposit crypto on Coinbase, you're handing them your private keys. Your account balance is just a number in their database — an IOU. You're trusting that when you hit "withdraw," they'll actually send you the crypto. That trust is the entire foundation. And most of the time, it works. But "most of the time" isn't "always." Exchanges can: - **Freeze your account** for compliance reasons (or sometimes no clear reason at all) - **Get hacked** — [Mt. Gox in 2014](https://en.wikipedia.org/wiki/Mt._Gox) lost 850,000 Bitcoin. Gone. - **Go bankrupt** — taking your funds with them - **Misuse your deposits** — lending them out, gambling with them, or worse Which brings us to the elephant in the room. ## The FTX Collapse: A Masterclass in Counterparty Risk FTX wasn't some shady back-alley exchange. It was endorsed by Tom Brady. It had its name on an NBA arena. 
Institutional investors — including [Sequoia Capital](https://en.wikipedia.org/wiki/Sequoia_Capital) and SoftBank — poured billions into it. It was "the responsible one" — the exchange that talked about regulation and compliance. Behind the scenes, FTX was funneling customer deposits to its sister company [Alameda Research](https://en.wikipedia.org/wiki/Alameda_Research), which was using the money for risky trades and personal expenses. When the market turned and customers tried to withdraw, the money simply wasn't there. [code block] $8 billion in customer funds. Vanished. Federal prosecutors later called it ["one of the biggest financial frauds in American history."](https://www.justice.gov/usao-sdny/pr/united-states-attorney-announces-charges-against-ftx-founder-samuel-bankman-fried) In November 2023, [Sam Bankman-Fried was convicted](https://www.nytimes.com/2023/11/02/technology/sam-bankman-fried-fraud-trial-ftx.html) on all seven criminal counts of fraud and conspiracy. > **The lesson:** Counterparty risk is real. When you trust a third party with your money, you're exposed to everything they do with it — including things they'll never tell you about. The FTX collapse didn't happen because of a smart contract bug or a blockchain failure. It happened because humans had custody of other people's money and chose to abuse that trust. The blockchain worked perfectly fine. The centralized institution built on top of it didn't. ## Proof of Reserves: Trust, but Verify? After FTX imploded, surviving exchanges scrambled to prove they weren't doing the same thing. The result: **[Proof of Reserves (PoR)](https://niccarter.info/proof-of-reserves/)**. The idea is simple — an exchange publishes cryptographic proof that they hold enough assets to cover all customer deposits. Some use Merkle trees (a data structure that lets you verify your account is included without revealing everyone else's). Others hire third-party auditors. 
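Merkle trees are worth a moment, because they're what makes Proof of Reserves verifiable by individual customers rather than just auditors. Here's a minimal TypeScript sketch — the leaf format and function names are my own illustration, and real PoR schemes add salting and other protections against balance probing:

```typescript
import { createHash } from "node:crypto";

// One exchange customer: an (id, balance) pair hashed into a leaf.
interface Leaf { id: string; balance: number; }

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

const hashLeaf = (l: Leaf): string => sha256(`${l.id}:${l.balance}`);

// Build the tree bottom-up; each level pairs adjacent hashes.
function buildLevels(leaves: Leaf[]): string[][] {
  let level = leaves.map(hashLeaf);
  const levels = [level];
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // An odd node out is paired with itself.
      next.push(sha256(level[i] + (level[i + 1] ?? level[i])));
    }
    level = next;
    levels.push(level);
  }
  return levels;
}

// The exchange hands each customer a proof: one sibling hash per level.
function proofFor(levels: string[][], index: number): string[] {
  const proof: string[] = [];
  for (let d = 0; d < levels.length - 1; d++) {
    const sibling = index ^ 1; // neighbor in the pair
    proof.push(levels[d][sibling] ?? levels[d][index]);
    index = Math.floor(index / 2);
  }
  return proof;
}

// The customer recomputes the root from their own leaf + the proof,
// without ever seeing anyone else's balance.
function verify(leaf: Leaf, index: number, proof: string[], root: string): boolean {
  let h = hashLeaf(leaf);
  for (const sibling of proof) {
    h = index % 2 === 0 ? sha256(h + sibling) : sha256(sibling + h);
    index = Math.floor(index / 2);
  }
  return h === root;
}
```

The exchange publishes only the root; if your recomputed root matches, your balance is provably included in the liability set — which is exactly why the "liabilities aren't always included" caveat below matters so much.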
It's a step in the right direction, but it's not bulletproof: - **Snapshot problem** — PoR shows assets at one point in time. An exchange could borrow funds for the snapshot and return them the next day. - **Liabilities aren't always included** — Proving you *have* assets doesn't prove you don't *owe* more than you have. - **Auditor quality varies** — Some "audits" are barely more than a press release. Proof of Reserves is better than blind trust. But it's still trust. You're trusting the methodology, the auditor, and the exchange's honesty about their liabilities. You know what requires zero trust? A decentralized exchange. ## What Is a DEX? A decentralized exchange (DEX) is a platform where you trade crypto directly with other people — no company in the middle. No accounts. No identity verification. No one holding your funds. Instead of a company matching orders, a DEX runs on smart contracts — code deployed on a blockchain that executes trades automatically. You connect your wallet, approve a transaction, and the swap happens on-chain. Your crypto goes directly from your wallet to the other person's wallet (or to a liquidity pool — more on that in a second). Popular DEXs include [Uniswap](https://uniswap.org/) (Ethereum), [PancakeSwap](https://pancakeswap.finance/) (BNB Chain), and [Jupiter](https://jup.ag/) (Solana). You can track DEX volume across all chains on [DefiLlama](https://defillama.com/dexs). The key difference: **you never give up custody of your funds.** Your crypto stays in your wallet until the exact moment a trade executes. There's no deposit step. There's no balance on someone else's server. There's no FTX scenario possible because there's no FTX. ## Order Book DEXs vs AMM DEXs Not all DEXs work the same way. There are two main flavors: ### Order Book DEXs These work like traditional exchanges — buyers post bids, sellers post asks, and the system matches them. The difference is the order book lives on-chain (or in a hybrid on/off-chain setup). 
Examples: [dYdX](https://dydx.exchange/), Serum (now [OpenBook](https://openbookdex.com/) on Solana). The advantage is precise pricing and familiar trading mechanics. The downside is that on-chain order books are expensive and slow on networks like Ethereum, which is why they tend to live on faster chains. ### AMM DEXs (Automated Market Makers) This is where things get interesting — and where most of the DEX innovation has happened. Instead of matching individual buyers and sellers, AMMs use **liquidity pools**. People deposit pairs of tokens into a pool (say, ETH and USDC), and a mathematical formula automatically determines the price based on the ratio of tokens in the pool. When you trade on Uniswap, you're not trading with another person. You're trading with a pool of tokens, and the price adjusts based on supply and demand within that pool. It sounds weird, but it works remarkably well. AMMs solved the chicken-and-egg problem that killed earlier DEX attempts — you don't need a buyer for every seller. You just need liquidity in the pool. > **Think of it like this:** An order book exchange is like a farmers' market where buyers and sellers haggle directly. An AMM is like a vending machine — the price is set by a formula, and it adjusts automatically based on what's left inside. ## The UX Gap Let's be honest: using a DEX for the first time is harder than using Coinbase. On a CEX, you sign up with an email, deposit money, and click "Buy." On a DEX, you need to: 1. Already have a self-custody wallet ([MetaMask](https://metamask.io/), [Phantom](https://phantom.app/), etc.) 2. Already have crypto in that wallet to trade with (and to pay gas fees) 3. Understand which network you're on 4. Approve token spending permissions 5. Watch out for slippage, front-running, and MEV 6. 
Accept that there's no customer support if you send tokens to the wrong address The gap is shrinking — wallet UX is improving, and DEX aggregators like [1inch](https://1inch.io/) find you the best prices automatically. But we're not at "grandma can use it" territory yet. For total beginners, CEXs remain the on-ramp. And that's okay. You can start with a CEX and graduate to DEXs as you get comfortable. ## KYC, Privacy, and Access Here's another fundamental difference that matters more than you'd think. **CEXs require KYC (Know Your Customer).** Before you can trade, you submit your ID, proof of address, sometimes even a selfie. This is a regulatory requirement — anti-money laundering (AML) laws mandate it in most countries. That means: - Your identity is linked to your trading activity - The exchange can (and will) share data with tax authorities - People in certain countries may be blocked entirely - Your account can be frozen based on government requests **DEXs are permissionless.** No account. No ID. No restrictions based on nationality. You connect a wallet and trade. Nobody can freeze your wallet or block your access to the smart contract. This isn't about "hiding something." For billions of people worldwide, permissionless access is the only option. If you live in a country with a broken banking system, capital controls, or authoritarian oversight, a DEX might be the only way you can participate in the global financial system. > **Important:** Permissionless doesn't mean regulation-free. Tax obligations still apply to you personally, even when using DEXs. The blockchain is public — every trade you make is recorded and visible forever. DEXs give you access; they don't give you invisibility. ## When to Use What This isn't an either/or decision. 
Smart crypto users use both: **Use a CEX when:** - Buying crypto with fiat (bank transfer, card) - You want simple UI and customer support - Trading high-volume pairs with tight spreads - You need advanced order types (stops, limits) - You're just getting started **Use a DEX when:** - Trading tokens not listed on CEXs - You want to maintain self-custody - Participating in DeFi (yield farming, LPs) - You value privacy and permissionless access - You don't trust third parties with your funds The pragmatic approach: **use a CEX as an on-ramp** (convert fiat to crypto), then **transfer to your own wallet** and use DEXs for everything else. Best of both worlds. ## The Hybrid Future The industry is converging. Both sides are borrowing from each other: - **CEXs are adding Proof of Reserves** and on-chain transparency - **DEXs are improving UX** with better interfaces, gasless transactions, and fiat on-ramps - **Hybrid models** are emerging — platforms that offer CEX-like simplicity with non-custodial wallets under the hood - **[Account abstraction](https://eips.ethereum.org/EIPS/eip-4337)** (ERC-4337 and beyond) is blurring the line between custodial and self-custody, making wallet management invisible to users We're heading toward a world where the average user won't even know — or care — whether they're using a CEX or a DEX. The trade will just work, and custody will default to self-custody without requiring a PhD in key management. But we're not there yet. So for now, understanding the tradeoffs matters. ## The Bottom Line Centralized exchanges are convenient. They're the front door to crypto for most people, and they serve that role well. But every time you leave funds on a CEX, you're trusting a company with your money — and history has shown that trust can be catastrophically misplaced. Decentralized exchanges remove that trust entirely. They're harder to use, sometimes more expensive, and can't help you if you make a mistake. 
But nobody can take your funds, freeze your account, or gamble with your deposits. The FTX collapse wasn't a crypto failure. It was a centralization failure. The blockchain did exactly what it was designed to do. The humans running the exchange didn't. **Not your keys, not your coins.** It's not just a slogan. It's a survival strategy. ## What's Next Now that you understand the CEX vs DEX landscape, we're going to go deep on the decentralized side. In **[Part 13](/blog/crypto-unlocked-13-spot-dexs)**, we'll explore **spot DEXs** in detail — how AMMs actually work under the hood, what liquidity pools really are, how to evaluate a DEX, and how to make your first swap without getting wrecked by slippage or MEV bots. See you there.
← [Previous: Advanced DeFi](/blog/crypto-unlocked-11-advanced-defi) · [Series Index](/blog/series/crypto-unlocked) · [Next: Spot DEXs](/blog/crypto-unlocked-13-spot-dexs) →
--- --- # Crypto Unlocked Part 13: Spot DEXs — The AMM Revolution URL: /blog/crypto-unlocked-13-spot-dexs Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, DEX, Uniswap, Jupiter, Curve, Beginners Series: Crypto Unlocked (Part 13 of 21) --- From Uniswap to Jupiter to Curve — a tour of the major decentralized spot exchanges, how each one works differently, and which to use when. You walk into a farmers' market. There's no Walmart greeter, no checkout lane, no corporate HQ deciding what goes on the shelves. Farmers show up, set prices, buyers browse, and deals happen peer-to-peer. That's a decentralized exchange — a **DEX** — but for crypto. No sign-ups, no KYC, no one holding your funds. Just you, your wallet, and a smart contract acting as the world's most transparent middleman. In [Part 12](/blog/crypto-unlocked-12-cexs-vs-dexs) we covered the differences between centralized and decentralized exchanges. Now let's zoom into where most of DeFi's action actually happens: **spot DEXs** — the places where you swap one token for another, right now, at market price. There are dozens of them across every chain, each with a different twist. Let's tour the heavyweights. ![How an Automated Market Maker (AMM) works — liquidity pools replace traditional order books](/assets/blog/crypto-unlocked-13/amm-explainer.png) ## Uniswap — The OG That Changed Everything If one project kicked off the DEX revolution, it's [**Uniswap**](https://uniswap.org). Launched in 2018 on Ethereum, it proved you don't need an order book to run an exchange. Instead, it introduced the **Automated Market Maker (AMM)** — a formula (x × y = k) that lets liquidity pools set prices automatically based on supply and demand. Here's the quick evolution: - **V2 (2020):** The classic. Any ERC-20 token pair, simple 50/50 liquidity pools. Worked beautifully but was capital-inefficient — your liquidity was spread across every possible price, even ones that would never get hit. 
- **V3 (2021):** Introduced **concentrated liquidity**. Instead of spreading money across the entire price range, LPs (liquidity providers) choose a specific range. Think of it like a street musician choosing to play in the busy square instead of an empty alley — same effort, way more tips. - **V4 (2025):** Added [**hooks**](https://docs.uniswap.org/contracts/v4/overview) — customizable plugins that let developers bolt new logic onto pools. Custom fee structures, on-chain limit orders, dynamic fees that adjust with volatility, custom oracle logic, and even entirely custom pricing curves. A new **singleton design** puts all pools in one contract for massive gas savings, and **flash accounting** (via EIP-1153 transient storage) nets token transfers so you only pay the final balance. It turned Uniswap from a DEX into a DEX *platform*. - **[Unichain](https://unichain.org) (2025):** Uniswap's own Layer 2 chain, built on the **OP Stack** (Optimism Superchain) and purpose-built for DeFi. 200ms sub-blocks (10× faster than most L2s), executed inside a TEE (Trusted Execution Environment) for MEV protection, with 65% of sequencer revenue committed back to validators. The DEX got its own country. > 💡 **Beginner tip:** Uniswap is the default choice on Ethereum and most EVM chains. If you're swapping tokens on Ethereum, Arbitrum, Polygon, or Base — start here. The interface at [app.uniswap.org](https://app.uniswap.org) is clean and battle-tested. ## Jupiter — Solana's Swiss Army Knife If Uniswap is the king of Ethereum DEXs, [**Jupiter**](https://jup.ag) is the emperor of Solana. But Jupiter isn't really a DEX itself — it's an **aggregator** that routes your trade across *every* Solana DEX to find the best price. Think of it as a travel search engine that checks every airline for the cheapest flight. 
What makes Jupiter special: - **Smart routing:** Powered by Jupiter's **Juno Liquidity Engine**, your swap might hop through Raydium, Orca, and three other pools in a single transaction to minimize slippage. Ultra Swap even simulates *executed* prices (not just quotes) to pick the route with the least real-world slippage. You don't see any of that complexity — you just get a better price. - **Limit orders:** Set a target price and Jupiter fills it when the market gets there. No more staring at charts. - **DCA (Dollar-Cost Averaging):** Automatically buy a token in chunks over time. Set it and forget it. - **JLP (Jupiter Liquidity Pool):** A basket of assets (SOL, ETH, BTC, stablecoins) that earns fees from Jupiter's perpetual trading platform. It's like an index fund that also earns trading fees. > 💡 **Pro tip:** On Solana, *always* route through Jupiter rather than going directly to individual DEXs. The aggregation almost always saves you money. Bookmark [jup.ag](https://jup.ag). ## Curve — The Stableswap Specialist Need to swap USDC for USDT? Or one type of wrapped Bitcoin for another? [**Curve**](https://curve.finance) is your place. While Uniswap uses a general-purpose formula, Curve built a custom bonding curve optimized for assets that *should* trade near the same price — stablecoins, wrapped versions of the same asset, and liquid staking tokens. The result? **Dramatically lower slippage** on stable-pair swaps. On Uniswap, swapping $1M of USDC→USDT can cost you a meaningful amount in slippage. On Curve, that same trade barely moves the needle. But Curve's real claim to fame is the **Curve Wars** — a meta-game of governance and incentives: - **veCRV:** Lock CRV tokens to get voting power. Votes direct where Curve's emission rewards flow. - **The Wars:** Protocols like Convex and Yearn compete to accumulate veCRV, because controlling those votes means directing yields to their preferred pools. It's like a political lobbying battle, but for liquidity. Sounds complicated? It is. 
But as a regular user, you just need to know: **Curve = stablecoin swaps and deep stable liquidity**. ## Raydium — Solana's Hybrid Engine [**Raydium**](https://raydium.io) was one of Solana's first major DEXs and does something clever: it combines an **AMM with an order book** (originally sharing liquidity with Serum/OpenBook's central limit order book). This hybrid approach means tighter spreads and more efficient price discovery than a pure AMM. Raydium is also where a *lot* of new Solana token launches happen. If a memecoin graduates from a launchpad like Pump.fun, it often migrates to a Raydium pool. That makes Raydium the de facto "listing venue" for Solana's long tail of tokens. ## PancakeSwap — The BNB Chain Powerhouse [**PancakeSwap**](https://pancakeswap.finance) did for BNB Chain (formerly Binance Smart Chain) what Uniswap did for Ethereum — but cheaper and with more gamification. It's got: - The classic AMM swap (now with v3 concentrated liquidity too) - Lottery and prediction markets - NFT marketplace - Multi-chain expansion to Ethereum, Arbitrum, Base, and more PancakeSwap was the gateway DEX for millions of users who found Ethereum's gas fees too high in 2021. It's still BNB Chain's dominant exchange by a wide margin and remains a solid choice for swaps on that network. ## Orca — Solana's Concentrated Liquidity Pioneer [**Orca**](https://orca.so) brought concentrated liquidity to Solana with its **Whirlpools** — think Uniswap V3-style positions but with Solana's speed and low fees. The interface is clean, the UX is beginner-friendly, and it's one of the most capital-efficient DEXs on Solana. For liquidity providers, Orca's Whirlpools allow much tighter ranges and higher fee capture compared to traditional AMMs. If you're LPing on Solana, Orca is where the sophisticated money goes. 
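All of these AMMs descend from the same constant-product idea (x × y = k) from the Uniswap section above. A toy pool makes the slippage behavior concrete — the reserve sizes and the 0.3% fee here are illustrative numbers, not any real pool's parameters:

```typescript
// Minimal constant-product AMM pool: x * y = k must hold after every swap.
class Pool {
  constructor(
    public reserveX: number, // e.g. ETH in the pool
    public reserveY: number, // e.g. USDC in the pool
    public fee = 0.003       // 0.3% — Uniswap V2's classic fee tier
  ) {}

  // Swap `dx` of token X for token Y; returns the amount of Y received.
  swapXForY(dx: number): number {
    const dxAfterFee = dx * (1 - this.fee);
    const k = this.reserveX * this.reserveY;
    // New Y reserve is whatever keeps x * y = k given the fee-adjusted input.
    const newY = k / (this.reserveX + dxAfterFee);
    const dy = this.reserveY - newY;
    this.reserveX += dx; // the fee stays in the pool, growing k for LPs
    this.reserveY -= dy;
    return dy;
  }

  // Marginal (spot) price of X in terms of Y.
  get spotPrice(): number {
    return this.reserveY / this.reserveX;
  }
}

// A small trade fills near the spot price; a big trade eats slippage,
// because every unit sold pushes the price further against you.
const pool = new Pool(1_000, 2_000_000); // 1000 ETH : 2,000,000 USDC → $2000/ETH
const small = pool.swapXForY(1);   // fills close to $2000
const big = pool.swapXForY(200);   // average price per ETH is far worse
```

Run it and the point jumps out: the 1 ETH sale fills within a fraction of a percent of spot, while the 200 ETH sale fills meaningfully worse per ETH. That widening gap is slippage — and it's exactly what Curve's specialized curve minimizes for same-peg assets.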
## Aerodrome — Base Chain's ve(3,3) King [**Aerodrome**](https://aerodrome.finance) is the leading DEX on **Base** (Coinbase's L2), and it runs on the **ve(3,3) model** pioneered by André Cronje's Solidly and refined by Velodrome on Optimism. The ve(3,3) model is a governance flywheel: - Lock AERO tokens → get veAERO (vote-escrowed AERO) - Vote on which pools receive emissions → earn trading fees and bribes from those pools - Protocols bribe veAERO voters to direct emissions toward their liquidity → deeper liquidity → more trading → more fees → more bribes It aligns incentives between the exchange, liquidity providers, and the protocols building on top. Aerodrome exploded in 2024-2025 and became one of the highest-revenue DEXs across all chains. > 💡 **Why it matters:** If you're trading on Base, Aerodrome is likely offering the deepest liquidity. If you want to earn yield, its ve(3,3) model offers compelling rewards for active governance participants. ## Trader Joe — The Liquidity Book [**Trader Joe**](https://traderjoexyz.com) started on Avalanche and expanded to Arbitrum and BNB Chain. Its standout feature is the **Liquidity Book** model — instead of a continuous price curve, liquidity is organized into discrete price **bins** (think of tiny buckets at exact price points). This gives LPs precise control over where they deploy capital and results in zero-slippage trades within a single bin. It's a different flavor of concentrated liquidity that some find more intuitive than Uniswap V3's range model. ## Osmosis — The Cosmos Hub In the **Cosmos ecosystem**, blockchains communicate via IBC (Inter-Blockchain Communication), and [**Osmosis**](https://osmosis.zone) is the central trading hub that connects them all. It's the place to swap ATOM, OSMO, and dozens of Cosmos chain tokens. Osmosis is unique because it's its own blockchain — an appchain DEX with custom modules for superfluid staking (stake your LP tokens and secure the network simultaneously). 
If you're in the Cosmos world, Osmosis is where you trade. ## Camelot — Arbitrum's Launchpad DEX [**Camelot**](https://camelot.exchange) carved out a niche as **Arbitrum's native DEX and launchpad**. Beyond standard swaps, it focuses on being a launch partner for new Arbitrum projects — providing initial liquidity, custom pool types, and community-driven incentives. It features Nitro pools (incentivized staking positions with specific conditions), dynamic directional fees, and a strong community-first ethos. Think of it as the local boutique exchange that knows every new Arbitrum project by name. | Type | How It Works | Best For | Examples | |------|-------------|----------|----------| | **AMM** | Liquidity pools + constant product formula | General token swaps | Uniswap, Raydium, Orca | | **Stableswap** | Optimized curve for same-peg assets | Stablecoin & LST swaps | Curve, Balancer | | **ve(3,3)** | Vote-escrowed tokens direct emissions to pools | Community-governed liquidity | Velodrome, Aerodrome | | **Aggregator** | Routes across multiple DEXs for best price | Large swaps, MEV protection | 1inch, CowSwap, Jupiter | | **Order Book** | On-chain limit orders matched by engine | Precision trading | dYdX, Hyperliquid | ## Aggregators — Your Secret Weapon Here's an important concept: **you don't have to pick one DEX**. Aggregators check prices across multiple DEXs and route your trade through the optimal path — sometimes splitting it across several pools. The big three on Ethereum/EVM: - [**1inch**](https://1inch.io): The OG aggregator. Clean interface, Fusion mode for gasless swaps using a Dutch auction mechanism. Works across most EVM chains. - [**CowSwap**](https://swap.cow.fi): Uses a unique **batch auction** system where traders' orders are matched peer-to-peer first (saving on fees), and only the remainder goes to on-chain DEXs. Also offers MEV protection — your trade won't get sandwich-attacked. 
- **ParaSwap:** Another solid aggregator with smart routing and a focus on DeFi power users. Offers a nice API for developers building on top. > 💡 **The golden rule:** For any swap over a few hundred dollars on Ethereum or EVM chains, try an aggregator first. The price improvement often pays for itself compared to going directly to a single DEX. CowSwap is my personal pick for MEV protection. ## Quick Reference: Which DEX for Which Chain? - **Ethereum** → Uniswap / CowSwap — Deepest liquidity + MEV protection - **Solana** → Jupiter — Aggregates everything - **Base** → Aerodrome — ve(3,3) flywheel, deepest Base liquidity - **Arbitrum** → Uniswap / Camelot — Depends on the token - **BNB Chain** → PancakeSwap — Dominant by far - **Avalanche** → Trader Joe — Liquidity Book efficiency - **Cosmos** → Osmosis — IBC hub - **Stablecoin swaps** → Curve — Purpose-built for stable pairs ## The Big Picture What's remarkable is how *different* each of these DEXs is. They're not just Uniswap clones — they're genuine experiments in market design. Concentrated liquidity, ve(3,3) flywheels, liquidity books, batch auctions, hybrid order books — DeFi is a live laboratory for financial innovation that would take decades to play out in traditional finance. And remember: on a DEX, **you never give up custody of your funds**. Your tokens go from your wallet to the smart contract and back — no intermediary holds them overnight. That's the fundamental promise of DeFi. --- ## What's Next? In **[Part 14](/blog/crypto-unlocked-14-perpetual-dexs-hyperliquid)**, we're moving from spot trading to the wild world of **perpetual DEXs** — decentralized platforms where you can trade with leverage, go long or short, and access derivative-style trading without a centralized exchange. We'll do a deep dive into **Hyperliquid**, the protocol that's challenging CEXs at their own game, plus GMX, dYdX, and the rest of the perps landscape. Buckle up — it gets spicier from here. 🌶️
← [Previous: CEXs vs DEXs](/blog/crypto-unlocked-12-cexs-vs-dexs) · [Series Index](/blog/series/crypto-unlocked) · [Next: Perpetual DEXs & Hyperliquid](/blog/crypto-unlocked-14-perpetual-dexs-hyperliquid) →
--- --- # Crypto Unlocked Part 14: Perpetual DEXs & Hyperliquid URL: /blog/crypto-unlocked-14-perpetual-dexs-hyperliquid Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Hyperliquid, Perpetuals, DEX, Trading, Beginners Series: Crypto Unlocked (Part 14 of 21) --- Trade futures without an exchange. Hyperliquid built an entire blockchain just to run the fastest on-chain order book ever. Here's how perp DEXs work and why they're eating centralized exchange volume. What if I told you there's a type of crypto trading that does more daily volume than all of DeFi's lending, swapping, and yield farming _combined_? And that it used to be exclusively controlled by centralized exchanges — until a team with zero VC funding built an entire blockchain from scratch just to change that? Welcome to the world of perpetual futures. This is where serious money moves, where traders make (and lose) fortunes in minutes, and where one of crypto's most impressive technical achievements — [Hyperliquid](https://app.hyperliquid.xyz) — is quietly eating Binance's lunch. Buckle up. This is a big one. ![Hyperliquid DEX trading interface showing HYPE/USDC perpetual futures with candlestick chart, order book, and professional trading controls](/assets/blog/crypto-unlocked-14/perp-trading-interface.jpg) ## What Are Perpetual Futures? Let's start simple. A **futures contract** is a bet on where the price of something is going. You don't buy the actual asset — you buy a contract that pays you if the price goes up (or down, depending on your position). Traditional futures have an expiry date. Oil futures expire in March. Wheat futures expire in June. When expiry hits, the contract settles and you're done. **Perpetual futures** (or "perps") are the crypto twist: they _never expire_. You can hold your position for five minutes or five months. There's no settlement date, no rolling over contracts. It's an endless bet on price. Think of it like renting vs. owning. 
Spot trading is buying a house — you own the ETH. Perps are like renting — you get exposure to the price without actually holding the asset. And just like rent, you pay a small ongoing fee to keep your position open. ## How Perps Stay Pegged: Funding Rates Here's the clever part. If perpetual contracts never expire, what stops them from drifting away from the actual spot price? The answer is **funding rates** — a mechanism so elegant it deserves its own explanation. Every few hours (typically every 8 hours), traders on one side pay traders on the other: - **When more people are long** (betting price goes up): longs pay shorts. This discourages piling into longs and pushes the perp price back down toward spot. - **When more people are short** (betting price goes down): shorts pay longs. Same logic in reverse. The rate fluctuates based on supply and demand. During a massive bull run, funding rates can spike — meaning it gets expensive to be long. During a crash, it flips, and shorts pay through the nose. > **💡 Pro insight:** Savvy traders actually _farm_ funding rates. They buy spot and short perps simultaneously, capturing the funding payments while being market-neutral. It's called a "cash-and-carry" trade, and it's one of the lower-risk strategies in crypto. ## Long, Short, Leverage, Liquidation — The Basics Let's demystify the jargon: - **Going long** = betting the price goes up. You profit when price rises. - **Going short** = betting the price goes down. You profit when price falls. (This is a big deal — on spot markets, you can't short. You can only sell what you have.) - **Leverage** = borrowing power. With 10x leverage, your $100 acts like $1,000. A 10% move in your favor means 100% profit. But a 10% move _against_ you... - **Margin** = the collateral you put up. It's your "skin in the game" that backs the leveraged position. - **Liquidation** = when the market moves against you enough that your margin is wiped out. The protocol force-closes your position. 
Your money is gone. It happens fast. [code block] Think of leverage like driving speed. 2x leverage is cruising at 60mph — reasonable, manageable. 10x is 300mph — exhilarating until something goes wrong. 50x or 100x? You're strapping yourself to a rocket. Most rockets explode. > **⚠️ Reality check:** The vast majority of leveraged traders lose money. Exchanges make a fortune from liquidations. If you're a beginner, watch and learn before you touch leverage. And if you do, start at 2-3x max. Seriously. ## Why Trade Perps at All? If perps are so risky, why do they dominate crypto trading volume? A few reasons: - **Shorting**: On spot markets, you can only sell what you own. Perps let you profit from falling prices — essential for hedging or trading bear markets. - **Leverage**: Sometimes you _want_ amplified exposure. A disciplined trader using 2-3x leverage with tight stop-losses can be highly capital efficient. - **Hedging**: If you're holding a bag of ETH you don't want to sell (maybe you're staking it), you can short ETH perps to protect against downside. Your spot position and perp position offset each other. - **Capital efficiency**: Why lock up $10,000 in spot when you can get the same exposure with $1,000 on perps and deploy the rest elsewhere? - **No expiry hassle**: Unlike traditional futures, you don't need to manage rolling positions. Open it, set your stops, walk away. For years, this was all happening on centralized exchanges. Binance, Bybit, OKX — they processed billions in perp volume daily. But they had a problem: you had to trust them with your money. And as FTX proved, that trust can be catastrophically misplaced. Enter decentralized perp DEXs. Enter **[Hyperliquid](https://app.hyperliquid.xyz)**. ## Hyperliquid: The Exchange That Built Its Own Blockchain This is where I get genuinely excited. Hyperliquid isn't just another perp DEX — it's a case study in what happens when obsessive engineers refuse to compromise. 
### The Origin Story Most DeFi protocols launch on an existing blockchain. Ethereum, Arbitrum, Solana — pick your chain, deploy your contracts, ship it. But the Hyperliquid team (led by Jeff Yan, a Harvard math/CS grad and former quant trader) had a problem: no existing chain was fast enough. They wanted to build a fully on-chain order book — every order, every cancellation, every trade recorded on the blockchain. To match the speed of centralized exchanges (we're talking sub-second execution), they needed a chain that could handle _tens of thousands of transactions per second_ with sub-second finality. So they built one. From scratch. **[HyperBFT](https://hyperliquid.gitbook.io/hyperliquid-docs)** is Hyperliquid's custom Layer 1 consensus mechanism, inspired by [Meta's HotStuff protocol](https://research.facebook.com/publications/hotstuff-bft-consensus-with-linearity-and-responsiveness/) and its successors. It's optimized for one thing: being the fastest possible settlement layer for a trading exchange. The L1 currently supports up to **200,000 orders per second** with sub-second block finality — and throughput is constantly improving as the node software is further optimized. The result? An order book that runs entirely on-chain with the speed of a CEX and the transparency of a blockchain. Every trade is verifiable. No hidden market makers. No exchange trading against its own users. No "we swear we didn't see your stop-loss and hunt it." ### Fully On-Chain Order Book (Not an AMM!) This distinction matters. Most DeFi exchanges use AMMs (automated market makers) — those liquidity pools we covered earlier in this series. AMMs are great for simple swaps, but they're terrible for serious trading. The slippage is bad, capital efficiency is poor, and you can't do sophisticated order types. Hyperliquid runs a **central limit order book** (CLOB) — the same type of system that powers the NYSE, Nasdaq, and Binance. 
Limit orders, market orders, stop-losses, take-profits — they all work exactly like you'd expect from a real exchange. The difference is that the matching engine lives on a blockchain instead of in Binance's data center. Why does this matter? - **Transparency**: You can verify every trade, every order, every liquidation on-chain. No more wondering if the exchange is front-running you. - **Self-custody**: Your funds live in your wallet until the moment they're used. No depositing to a centralized custodian and praying they don't pull an FTX. - **Censorship resistance**: No KYC (for now), no account freezes, no geographic restrictions. A trader in Lagos has the same access as a trader in London. ### No VC Funding — Community First Here's what makes Hyperliquid culturally unique in crypto: **they took zero venture capital money.** No Andreessen Horowitz. No Paradigm. No Sequoia. The team self-funded development. Why does this matter? Because in crypto, VC-funded projects have a nasty habit of treating their community as exit liquidity. VCs get cheap tokens early, then dump them on retail. Hyperliquid flipped that model on its head. ### The $HYPE Airdrop On November 29, 2024, Hyperliquid executed one of the largest and most celebrated airdrops in crypto history. They distributed **31% of the total HYPE token supply** — 310 million tokens out of a 1 billion total supply — to early users of the platform. The remaining allocation reserved 38.888% for future emissions and community rewards, and 30.112% for team and contributors (with a vesting schedule). No insider allocation games. No tiered structures favoring whales. People who had been actively trading on the platform received life-changing amounts of tokens. Some early adopters received hundreds of thousands of dollars worth of HYPE. The token launched at around $2 and quickly ripped past $30 within weeks. The community went absolutely feral (in the best way). 
To this day, [HYPE holders](https://coinmarketcap.com/currencies/hyperliquid/) are some of the most loyal and vocal in all of crypto — because the team earned that loyalty by putting users first. The protocol also uses trading fee revenue to conduct [ongoing HYPE buybacks](https://www.dlnews.com/articles/defi/hyperliquid-hype-token-buyback-1bn-but-is-it-sustainable/) through its Assistance Fund, creating constant buy pressure. > **💡 Note:** The HYPE airdrop became the gold standard for how to launch a token. Fair distribution, rewarding actual users, no VC dumping. Every project since gets compared to it. ### HyperEVM: From Exchange to Ecosystem Hyperliquid started as a perps exchange, but the team had bigger plans. With the launch of **[HyperEVM](https://hyperliquid.gitbook.io/hyperliquid-docs/hyperevm)** in February 2025 — a general-purpose EVM (Ethereum Virtual Machine) compatible execution environment — Hyperliquid became a full-fledged blockchain ecosystem. Crucially, HyperEVM is _not_ a separate chain. It runs under the same HyperBFT consensus as HyperCore (the order book layer), meaning EVM smart contracts can directly read prices from and send orders to the native spot and perp order books. A lending protocol can liquidate positions through HyperCore's order books in just a few lines of Solidity code. This tight integration is a massive architectural advantage. This means developers can now build any type of DeFi application on Hyperliquid: lending protocols, stablecoins, NFT marketplaces, whatever. All of these apps can natively interact with the Hyperliquid order book and the liquidity it provides. You can explore the growing ecosystem at [HypurrCo](https://www.hypurr.co/ecosystem-projects) or [HL Eco](https://hl.eco/projects). It's an ambitious play: build the best exchange first, attract liquidity, then expand into an entire financial ecosystem anchored by that liquidity. Think of it like Amazon starting with books and expanding to... everything. 
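That "few lines of Solidity" claim is really about composability: the exchange layer is just another contract your app can call. Here's a toy TypeScript sketch of the idea — the `OrderBook` interface and every name in it are invented for illustration and are *not* the actual HyperCore/HyperEVM API:

```typescript
// Illustrative only: a lending app that settles liquidations through a
// shared on-chain order book (the HyperCore idea). Every name here is
// invented for this sketch — this is NOT the real HyperCore interface.

interface OrderBook {
  bestBid(asset: string): number;                  // best on-book bid price
  marketSell(asset: string, size: number): number; // returns proceeds in USDC
}

interface Position {
  asset: string;
  collateralSize: number; // units of `asset` posted as collateral
  debtUsdc: number;       // borrowed USDC
}

// A toy in-memory book standing in for the native exchange layer.
class ToyBook implements OrderBook {
  constructor(private prices: Record<string, number>) {}
  bestBid(asset: string): number {
    return this.prices[asset];
  }
  marketSell(asset: string, size: number): number {
    return size * this.prices[asset]; // ignore slippage for the sketch
  }
}

// Liquidate when collateral value falls below debt * minHealthFactor,
// selling straight into the shared order book.
function maybeLiquidate(
  book: OrderBook,
  p: Position,
  minHealthFactor = 1.1
): number | null {
  const collateralValue = p.collateralSize * book.bestBid(p.asset);
  if (collateralValue >= p.debtUsdc * minHealthFactor) return null; // healthy
  return book.marketSell(p.asset, p.collateralSize);
}

const book = new ToyBook({ ETH: 2000 });
const position: Position = { asset: "ETH", collateralSize: 1, debtUsdc: 1900 };
// 1 ETH * $2000 = $2000 < $1900 * 1.1 = $2090 → unhealthy → force-sold
console.log(maybeLiquidate(book, position)); // 2000
```

The point isn't the toy math — it's that on most chains the "book" lives on a centralized exchange your contract can't touch, while here it's a call away.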
### Vaults and Builder Codes Two more features worth highlighting: **Vaults** are Hyperliquid's answer to copy trading. Top traders can create vaults that others deposit into. The vault automatically mirrors the trader's positions. It's like a decentralized hedge fund — you can follow a skilled trader's strategy without managing trades yourself. **Builder codes** are an ecosystem incentive mechanism. Developers who build frontends, tools, or integrations for Hyperliquid can attach their builder code to trades routed through their applications, earning a share of trading fees. It's like an affiliate program baked into the protocol itself. ### The Numbers Speak As of early 2026, Hyperliquid regularly processes **$5-10 billion in daily trading volume** — putting it in direct competition with the derivatives offerings of major centralized exchanges. For a protocol that launched its token barely a year ago, with no VC backing, these numbers are staggering. The platform supports 100+ trading pairs, with leverage up to 50x on major assets. And all of it is running on-chain, fully transparent, with sub-second execution times. The exchange has also become a hub for [commodities trading](https://www.dlnews.com/articles/markets/why-hype-token-is-surging-amid-a-silver-and-gold-trading-frenzy/) — with silver and gold markets regularly generating hundreds of millions in daily volume — and for pre-launch token speculation. ## The Wider Perp DEX Landscape Hyperliquid is the current king, but it didn't build in a vacuum. Here's how the other major perp DEXs compare: ![Weekly perpetual DEX volumes from 2021 to 2024 — explosive growth in late 2024 surpassing 100B, showing dozens of competing protocols](/assets/blog/crypto-unlocked-14/perp-dex-landscape.jpg) ### dYdX — The OG [dYdX](https://dydx.exchange) was the first serious decentralized perpetuals exchange. 
Originally built on Ethereum (then StarkWare for scaling), the team made the bold move of migrating to their own **Cosmos-based blockchain** ([dYdX Chain](https://www.dydx.xyz/)) in late 2023. They wanted full control over the validator set and fee structure. With over $1.5 trillion in lifetime volume and 220+ markets, it's a solid platform with deep history, though Hyperliquid has overtaken it in daily volume and mindshare. ### GMX — The Real Yield Pioneer [GMX](https://gmx.io) introduced a model that got DeFi degens excited. In V1 it was the **GLP pool**; in the current V2, it evolved into **[GM Pools](https://docs.gmx.io/docs/providing-liquidity)** — isolated liquidity pools per trading pair, improving risk management. Liquidity providers deposit assets into these pools, and traders trade against them using Chainlink Data Streams oracle prices. When traders lose (and statistically, most do), LP holders profit. GMX popularized the concept of "real yield" — earning fees from actual economic activity rather than inflationary token emissions. It runs on Arbitrum and Avalanche, with leverage up to 100x. ### Vertex — The Hybrid [Vertex Protocol](https://vertexprotocol.com) combines an order book _and_ an AMM in one system, with cross-margin across all positions. It's fast, capital-efficient, and supports spot, perps, and money markets in one interface. Think of it as the Swiss Army knife of perp DEXs. Lives on Arbitrum. ### Solana Contenders: Drift & Jupiter Perps **[Drift Protocol](https://www.drift.trade)** is Solana's leading native perps platform, offering cross-margined perpetuals with an order book model that takes advantage of Solana's speed. It supports over 50 markets with up to 101x leverage on SOL, BTC, and ETH perps, and has processed over $50 billion in cumulative volume. **[Jupiter Perps](https://jup.ag/perps)** leverages the JLP (Jupiter Liquidity Provider) pool — similar to GMX's model.
Given Jupiter's dominance as Solana's aggregator, its perps product has seen massive adoption. If you're already in the Solana ecosystem, Jupiter perps feel like a natural extension. ### Gains Network (gTrade) — Beyond Crypto [gTrade](https://gains.trade) is fascinating because it goes beyond crypto assets. You can trade **290+ assets across crypto, forex, stocks, indices, and commodities** as perpetuals — all on-chain. Want to long EUR/USD or short the S&P 500 from a DeFi wallet? gTrade lets you do that, with leverage up to 500x on some pairs. Powered by the GNS token and deployed on Arbitrum and Polygon, it's processed over $125 billion in total volume. ### Kwenta / Synthetix — Synthetic Everything [Synthetix](https://www.synthetix.io) takes the synthetic approach to its logical extreme. Every asset is a synthetic representation powered by Synthetix's collateral pool. Originally, Kwenta served as the frontend, but the ecosystem has consolidated — Synthetix now runs its own [exchange](https://exchange.synthetix.io) with perps live on **Ethereum mainnet** (not just L2s). You can trade with multicollateral margin using ETH, wstETH, cbBTC, or sUSDe. Think of it as the most DeFi-native approach to perpetuals, with the security of Ethereum L1 custody and no bridging required. ## The Great Migration: CEX to DEX ![CEX vs DEX on-chain transaction volume from 2017–2022 — DEX volume surges past CEX in mid-2021 before both converge in the bear market](/assets/blog/crypto-unlocked-14/cex-to-dex-migration.jpg) Here's the macro trend that makes this chapter so important: **volume is steadily migrating from centralized to decentralized exchanges.** In 2022, decentralized perp DEXs handled roughly 1-2% of total crypto derivatives volume. By 2025, that number crossed 10% and is climbing. The reasons are structural: 1. **Trust deficit**: FTX's collapse showed the world that centralized exchanges can steal your money. Every new scandal pushes more volume on-chain. 2. 
**Better tech**: Platforms like Hyperliquid proved you don't have to sacrifice speed or UX for decentralization. The gap has closed. 3. **Composability**: On-chain perps can integrate with lending, staking, and other DeFi protocols. CEX perps are siloed. 4. **Global access**: No KYC means a farmer in Nigeria and a developer in Vietnam can access the same markets as a Wall Street trader. 5. **Airdrop incentives**: Let's be honest — the potential for future airdrops has driven massive volume to new DEXs. Hyperliquid proved that early usage can pay off enormously. This doesn't mean CEXs are dying. Binance and Bybit still dominate total volume. But the direction is clear: on-chain derivatives are growing faster than any other segment in DeFi, and the technology gap between CEX and DEX shrinks every month. > **🔮 My take:** In 5 years, the idea of depositing funds to a centralized exchange to trade derivatives will feel as outdated as calling your broker to place a stock trade. The infrastructure is being built _right now_, and Hyperliquid is leading the charge. ## Quick Reference: Perp DEX Comparison - **Hyperliquid** — Own L1 (HyperBFT) · On-chain order book · CEX speed, no VCs, massive airdrop - **dYdX** — Own chain (Cosmos) · Order book · OG perp DEX, battle-tested - **GMX** — Arbitrum / Avalanche · GM Pools (oracle-based) · Real yield to LPs - **Vertex** — Arbitrum · Hybrid CLOB + AMM · Cross-margin everything - **Drift** — Solana · Order book · Fast Solana-native perps - **Jupiter Perps** — Solana · JLP pool · Massive Solana user base - **gTrade** — Arbitrum / Polygon · Oracle-based · Forex, stocks, commodities (290+ pairs) - **Synthetix** — Ethereum Mainnet · Synthetic · Multicollateral margin, L1 custody ## Key Takeaways - **Perpetual futures** are contracts that let you bet on price direction with leverage — they never expire and are the highest-volume instrument in crypto. 
- **Funding rates** keep perp prices aligned with spot through periodic payments between longs and shorts. - **Leverage is a tool, not a toy.** Most leveraged traders lose money. Respect it. - **Hyperliquid** built an entirely new blockchain just to run a trading exchange — and it worked. Sub-second execution, fully on-chain, no VC funding, community-first. - **The perp DEX landscape** is rich and diverse: dYdX, GMX, Vertex, Drift, Jupiter, gTrade, and Synthetix each bring unique innovations. - **Volume is migrating on-chain.** The FTX collapse accelerated it, and better tech is sustaining it. This trend isn't reversing. --- ## What's Next? Perps are just the beginning of on-chain derivatives. In **[Part 15](/blog/crypto-unlocked-15-options-advanced-instruments)**, we're going deeper into the derivatives rabbit hole with **options and advanced instruments** — DeFi protocols that let you trade options, structured products, and exotic derivatives without a TradFi broker. If perps are crypto's stock market, options are its weapons-grade toolkit. See you there.
← [Previous: Spot DEXs](/blog/crypto-unlocked-13-spot-dexs) · [Series Index](/blog/series/crypto-unlocked) · [Next: Options & Advanced Trading](/blog/crypto-unlocked-15-options-advanced-instruments) →
--- --- # Crypto Unlocked Part 15: Options & Advanced Trading URL: /blog/crypto-unlocked-15-options-advanced-instruments Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Options, Trading, DeFi, Beginners Series: Crypto Unlocked (Part 15 of 21) --- Calls, puts, and exotic instruments — on-chain options trading is still early but growing fast. Here's what you need to know. You've learned to swap tokens, provide liquidity, lend, borrow, and even trade perpetuals with leverage. But there's one more level to the trading game that most crypto beginners never touch — and honestly, most traditional investors don't either. **Options.** The word alone makes people's eyes glaze over. Strike prices, premiums, [Greeks](https://www.investopedia.com/terms/g/greeks.asp) — it sounds like a finance PhD entrance exam. But here's the thing: options are actually one of the most *useful* financial tools ever invented. They let you bet on prices going up, going down, or staying flat — all while knowing your maximum possible loss *before* you enter the trade. Let's break it down. ## Options 101: Insurance You Can Trade Forget the textbook definitions for a second. Think about car insurance. You pay a monthly **premium** to your insurance company. If nothing bad happens, you lose that premium — gone. But if you crash your car, the insurance kicks in and covers the damage. You paid a small, known cost to protect against a big, unknown loss. **Options work the exact same way.** A **[call option](https://www.investopedia.com/terms/c/calloption.asp)** gives you the right (but not the obligation) to *buy* an asset at a specific price before a specific date. You'd buy a call if you think the price is going up. A **[put option](https://www.investopedia.com/terms/p/putoption.asp)** gives you the right to *sell* at a specific price before a specific date. You'd buy a put if you think the price is going down — or if you want to protect a position you already hold. That's it. 
Calls = bullish bets. Puts = bearish bets (or insurance). ![Options payoff profiles — Long Call, Long Put, Short Call, and Short Put with strike price X and payout formulas](/assets/blog/crypto-unlocked-15/options-payoff-diagram.jpg) ## The Key Terms (No Jargon Left Behind) Let's say ETH is trading at $3,000 and you buy a call option. Here are the moving parts: - **Strike price** — The price at which you can buy (or sell). If your call has a $3,500 strike, you have the right to buy ETH at $3,500 no matter where the market goes. - **Premium** — What you pay for the option. Think of it as the ticket price. If the premium is $100, that's your maximum loss if the trade doesn't work out. - **Expiry** — The deadline. After this date, the option is worthless. Options are time-limited by design. - **In the money (ITM)** — Your option has value right now. For a call, that means the market price is *above* the strike. - **Out of the money (OTM)** — Your option has no intrinsic value yet. The market hasn't reached your strike price. > 💡 **The beauty of buying options:** your downside is capped at the premium you paid. If ETH crashes to $1,000 after you bought that $3,500 call, you only lose the $100 premium. Compare that to being leveraged long and getting liquidated for everything. ## Why Would Anyone Trade Options? Three main reasons: **1. Hedging (insurance)** You hold 10 ETH and you're worried about a crash. Instead of selling, you buy put options. If ETH drops, your puts increase in value, offsetting your losses. If ETH goes up, you only lose the premium — a small price for peace of mind. **2. Income generation** You can *sell* options (called "writing") to collect premiums. If you own ETH and sell calls against it (a "covered call"), you earn income as long as the price stays below the strike. It's like renting out your crypto. **3. Leveraged bets with capped downside** Options give you asymmetric upside. 
A $100 premium on a call option could turn into $1,000+ if the price rockets past your strike. Your maximum loss? That same $100. No liquidation, no margin calls. You know the worst case before you click "buy." > ⚠️ **Important caveat:** *Selling* options is a completely different risk profile. Buyers have capped losses; sellers can face theoretically unlimited losses. If you're a beginner, stick to buying options until you deeply understand the mechanics. ## On-Chain Options: The Platforms In traditional finance, options trade on regulated exchanges like the [CBOE](https://www.cboe.com/). In crypto, most options volume still happens on centralized exchanges — **[Deribit](https://www.deribit.com/)** has historically dominated with roughly 85%+ of all crypto options volume. It's been the undisputed king — so much so that [Coinbase acquired Deribit in 2025](https://www.coinbase.com/blog/coinbase-signs-agreement-to-acquire-deribit) for $2.9 billion, the largest M&A deal in crypto history. But DeFi is catching up. Here are the protocols pushing options on-chain: ![DeFi options trading interfaces](/assets/blog/crypto-unlocked-15/defi-options-trading.png) - **[Aevo](https://www.aevo.xyz/)** — Built by the team behind Ribbon Finance on a custom L2 using the [OP Stack](https://docs.optimism.io/). Clean interface, order book model with off-chain matching and on-chain settlement — feels close to a CEX experience. They've expanded into perpetuals and structured strategies too. - **[Derive](https://www.derive.xyz/)** (formerly Lyra) — One of the OG DeFi options protocols, originally launched on Optimism in 2021. Led by a former Susquehanna options trader, Derive offers institutional-grade on-chain options and perps. Rebranded from Lyra to Derive to reflect its evolution beyond a pure options AMM. - **[Premia](https://www.premia.blue/)** — Now operating as Premia Labs with their [Kyan](https://kyan.blue/) exchange for options and perpetuals with portfolio margin. 
Multi-chain across Arbitrum, Ethereum, and Base. They've iterated through multiple versions to improve capital efficiency and combo strategies. - **[Hegic](https://www.hegic.co/)** — A simpler approach to on-chain options on Ethereum. You pick your asset, direction, amount, and timeframe. The protocol handles the rest. Good for people who find traditional options interfaces overwhelming. ### Panoptic: The Wild Card This one deserves its own mention because it's genuinely innovative. **[Panoptic](https://panoptic.xyz/)** builds options on top of [Uniswap V3](https://uniswap.org/) liquidity positions. Here's the insight: when you provide concentrated liquidity on Uniswap V3, you're already taking on a payoff profile that looks remarkably like selling options. Panoptic formalizes this. Instead of creating separate options markets, Panoptic lets traders buy and sell options that are directly derived from Uniswap LP positions. LPs effectively become options sellers, and traders can buy those options without anyone having to write a traditional options contract. It's clever because it solves one of the biggest problems in DeFi options: **liquidity**. Instead of bootstrapping a new market from scratch, Panoptic piggybacks on Uniswap's existing deep liquidity. > 🧠 **Why this matters:** Most DeFi options protocols struggle with thin liquidity — wide spreads, bad fills, not enough counterparties. By connecting to Uniswap's liquidity, Panoptic sidesteps this cold-start problem entirely. ## Structured Products: Options on Autopilot Not everyone wants to manually pick strikes and expiries. That's where **DeFi Options Vaults (DOVs)** come in. DOVs are structured products — essentially smart contract vaults that automatically execute options strategies on your behalf. You deposit your crypto, the vault sells options (usually covered calls or cash-secured puts), collects the premiums, and distributes the yield back to depositors. **How a typical covered call vault works:** 1. 
You deposit ETH into the vault 2. Every week (or some interval), the vault sells call options with a strike price above the current market price 3. If ETH stays below the strike, the options expire worthless and the vault keeps the premium → that's your yield 4. If ETH rockets past the strike, you miss the upside above that level — that's the tradeoff **Ribbon Finance** was the pioneer here before winding down. **Thetanuts** and **Cega** have carried the torch. These vaults were hugely popular during calmer markets when premiums were juicy and prices weren't making violent moves. The catch? DOVs got wrecked during volatile periods. When prices blew past strike prices, vault depositors missed massive upside. And during crashes, the underlying assets lost value faster than premiums could compensate. They work best in sideways or gently trending markets. ## Prediction Markets: Betting on Reality While not technically options, **prediction markets** deserve a spot here because they're derivatives on real-world outcomes — and they've exploded in popularity. ![Polymarket prediction market interface showing 2024 US Election betting markets with odds, trading volumes, and Yes/No outcomes](/assets/blog/crypto-unlocked-15/prediction-markets.jpg) - **[Polymarket](https://polymarket.com/)** — The breakout star, self-described as "The World's Largest Prediction Market." Built on the [Polygon](https://polygon.technology/) blockchain using USDC, it lets you bet on anything from election outcomes to "Will Bitcoin hit $100K by December?" Binary outcomes, priced between $0 and $1. If you're right, you get $1 per share. If wrong, you get $0. Simple, addictive, and surprisingly informative. Founded in 2020 by Shayne Coplan, Polymarket has attracted backing from Peter Thiel's Founders Fund and Vitalik Buterin. - **[Drift](https://www.drift.trade/)** — A Solana-based exchange that offers prediction-style markets alongside perpetuals. Betting on crypto events and broader outcomes. 
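That $0-to-$1 pricing means a share's price *is* the market's implied probability. A toy sketch of the payoff math in TypeScript — the numbers are mine, not from any live market:

```typescript
// A binary prediction-market share costs `price` (between 0 and 1),
// pays out $1 if the outcome resolves YES and $0 otherwise.
// Purely illustrative math — not any platform's API.

// Profit per share if you buy YES at `price` and turn out to be right.
const profitIfRight = (price: number): number => 1 - price;

// Expected profit per share, given your own probability estimate.
const edge = (price: number, yourProbability: number): number =>
  yourProbability - price;

// Market prices a candidate at 65%; you believe 75% is closer to the truth.
console.log(profitIfRight(0.65)); // ≈ 0.35 per share if YES hits
console.log(edge(0.65, 0.75));    // ≈ 0.10 expected profit per share
```

If your estimate matches the market's, your edge is zero before fees — which is exactly why these prices are such good forecasts: anyone who spots a mispricing is paid to correct it.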
Prediction markets are fascinating because they turn crowd wisdom into prices. When Polymarket shows a candidate at 65%, that's the market's collective bet — and historically, these markets have been *more accurate* than polls and pundits. > 🔮 **Hot take:** Prediction markets might be DeFi's best mainstream product. People who'd never touch a perpetual contract will happily bet $50 on an election outcome. It's financial infrastructure disguised as a betting app. ## The Reality Check: Why Options Are Harder On-Chain Let's be honest about where things stand. **Deribit (now under Coinbase) still dominates.** The vast majority of crypto options volume happens on this single centralized exchange. Why? Because options are complex instruments that need: - **Deep liquidity** across many different strikes and expiries (options have way more individual markets than spot or perps) - **Fast execution** — timing matters when prices move - **Sophisticated market makers** who continuously quote prices - **Low fees** — options premiums can be small, so fees need to be proportionally tiny On-chain environments struggle with all of these. Even on fast L2s, the latency and gas costs create friction. And liquidity fragmentation is brutal — each strike/expiry combination needs its own pool of liquidity. **The complexity gap is real:** - Spot trading? One market, one price. Easy. - Perpetuals? One market per asset, just with leverage. Manageable. - Options? Dozens of strikes × multiple expiry dates × calls AND puts = hundreds of individual markets *per asset*. That's a liquidity nightmare. This is why most DeFi options protocols have modest volumes compared to spot DEXs or even on-chain perps. The infrastructure is improving — better L2s, smarter AMM designs, intent-based architectures — but we're still early. **That said, "still early" in crypto often means "about to get interesting."** The protocols building now are laying the groundwork. 
When the infrastructure catches up (faster chains, cheaper transactions, better market-making tools), on-chain options could explode. ## Key Takeaways - **Options = insurance contracts you can trade.** Calls for upside bets, puts for downside protection. - **Your maximum loss when buying options is the premium.** No liquidation surprises. - **On-chain options exist** (Aevo, Derive, Premia, Hegic, Panoptic) but most volume is still on Deribit/Coinbase (centralized). - **DOVs automate options strategies** but come with tradeoffs — especially during volatile markets. - **Prediction markets** (Polymarket) are derivatives on real-world events and might be DeFi's best gateway drug. - **The on-chain options space is still early.** Liquidity fragmentation and complexity make it harder than spot or perps. But it's getting better fast. > 🎯 **Bottom line:** You don't *need* to trade options. Most people do fine with spot and maybe some perps. But understanding options gives you a mental framework for thinking about risk, probability, and asymmetry that makes you a better investor overall — even if you never buy a single contract. ## What's Next? You've now got the full toolkit: spot, lending, leverage, perps, options. But how do you actually *use* all this without losing your mind? In **[Part 16](/blog/crypto-unlocked-16-trading-tools)**, we'll cover **trading tools and dashboards** — the apps, aggregators, and analytics platforms that help you track positions, find opportunities, and avoid getting rekt across all these protocols. Because having the tools is one thing. Knowing where to look is everything.
← [Previous: Perpetual DEXs & Hyperliquid](/blog/crypto-unlocked-14-perpetual-dexs-hyperliquid) · [Series Index](/blog/series/crypto-unlocked) · [Next: On-Chain Trading Tools](/blog/crypto-unlocked-16-trading-tools) →
--- --- # Crypto Unlocked Part 16: On-Chain Trading Tools URL: /blog/crypto-unlocked-16-trading-tools Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Trading Tools, DeFi, Bridges, Beginners Series: Crypto Unlocked (Part 16 of 21) --- Aggregators, bridges, portfolio trackers, and MEV protection — the essential toolkit for navigating DeFi like a pro. You wouldn't show up to a construction site with just your bare hands. You'd bring tools — a drill, a level, maybe a thermos of coffee to keep you sane. DeFi is the same. The protocols we've covered in previous parts are the raw materials. Now let's talk about the **tools** that make working with them faster, cheaper, and a whole lot safer. This chapter is your DeFi toolbox. Bookmark it. You'll come back to it. ## DEX Aggregators: Your Best Friend for Swaps Remember how we talked about decentralized exchanges in [Part 10](/blog/crypto-unlocked-10-dexs-liquidity-pools)? Uniswap, SushiSwap, Curve — each has its own liquidity pools with slightly different prices. If you swap directly on one DEX, you might get a worse rate than what's available elsewhere. A **DEX aggregator** checks dozens of DEXs simultaneously and routes your trade through whichever path gets you the best price. Think of it like a flight comparison site — instead of checking every airline individually, you search once and get the cheapest option. The big names: - **[1inch](https://1inch.io)** — The OG aggregator on Ethereum and many EVM chains. Splits trades across multiple DEXs for optimal pricing. - **[Jupiter](https://jup.ag)** — The king of Solana. If you're swapping anything on Solana, Jupiter is where you go. Period. - **[CowSwap](https://swap.cow.fi)** — Uses a unique "batch auction" model that groups trades together. This gives you MEV protection built-in (more on MEV soon — it's important). - **ParaSwap** — Another strong EVM aggregator with competitive rates and a clean interface. 
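The flight-comparison analogy is essentially the whole algorithm. A minimal TypeScript sketch — the venues and quote numbers are made up, and real aggregators also split a single trade across several venues rather than picking just one:

```typescript
// What an aggregator does at its core: ask every venue for a quote on
// the same trade, take the best. Venue names and numbers are invented.

interface Quote {
  venue: string;
  amountOut: number; // tokens received for a fixed amount in
}

function bestRoute(quotes: Quote[]): Quote {
  return quotes.reduce((best, q) => (q.amountOut > best.amountOut ? q : best));
}

const quotes: Quote[] = [
  { venue: "DEX A", amountOut: 2980.1 },
  { venue: "DEX B", amountOut: 2995.4 }, // ~0.5% better — real money on size
  { venue: "DEX C", amountOut: 2971.8 },
];
console.log(bestRoute(quotes).venue); // "DEX B"
```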
[code block] > **Pro tip:** Always use an aggregator instead of going directly to a single DEX. The price difference can be 0.5–2% on larger trades. That's free money you're leaving on the table otherwise. ## Bridges: Crossing the Chain Divide You've got ETH on Ethereum but want to use a dApp on Arbitrum. Or you have SOL but need USDC on Base. That's where **bridges** come in — they move your assets from one blockchain to another. Popular bridges include: - **Across** — Fast and cheap, especially for Ethereum L2s. Uses an optimistic model with relayers that front you the funds. - **Stargate** — Built on [LayerZero](https://layerzero.network)'s messaging protocol. Supports a wide range of chains with unified liquidity pools. - **[Wormhole](https://wormhole.com)** — Connects a massive number of chains including Solana, Ethereum, and Cosmos ecosystems. - **[LayerZero](https://layerzero.network)** — More of an infrastructure layer. Many bridges and apps build on top of it. You'll interact with it indirectly through apps like Stargate. ### Bridge Risks: The Elephant in the Room I'm not going to sugarcoat this: bridges are one of the riskiest parts of crypto. They've been responsible for some of the biggest hacks in history: - **Ronin Bridge (2022):** $625 million stolen. Attackers compromised validator keys for the bridge connecting Axie Infinity's sidechain. The hack went unnoticed for *six days*. - **Wormhole (2022):** $320 million lost due to a smart contract vulnerability. An attacker minted wrapped ETH on Solana without actually depositing ETH on Ethereum. Why are bridges so vulnerable? Because they're essentially a giant vault sitting between two chains, and they need some mechanism to verify that a deposit on Chain A actually happened before releasing funds on Chain B. That verification layer is the attack surface. 
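To see why that verification layer is the weak point, here's a toy lock-and-mint bridge in TypeScript. Everything here is simplified and hypothetical (real bridges use light clients, multisigs, or fraud proofs, plus replay protection on deposit IDs), but it shows the core mechanism: Chain B only mints when a quorum of validators attests that a deposit happened on Chain A.

```typescript
// Toy lock-and-mint bridge. All names and numbers are invented;
// this omits replay protection and signature verification entirely.
type Deposit = { id: string; amount: number };

class ToyBridge {
  private locked = new Map<string, number>(); // vault on Chain A
  private minted = new Map<string, number>(); // wrapped tokens on Chain B
  constructor(private validators: string[], private threshold: number) {}

  // Step 1: user locks tokens in the vault on Chain A.
  lock(user: string, amount: number, id: string): Deposit {
    this.locked.set(user, (this.locked.get(user) ?? 0) + amount);
    return { id, amount };
  }

  // Step 2: Chain B mints only if enough distinct validators attest.
  mint(user: string, d: Deposit, attestations: string[]): boolean {
    const valid = attestations.filter((a) => this.validators.includes(a));
    if (new Set(valid).size < this.threshold) return false; // no quorum
    this.minted.set(user, (this.minted.get(user) ?? 0) + d.amount);
    return true;
  }
}

const bridge = new ToyBridge(["v1", "v2", "v3", "v4", "v5"], 3);
const dep = bridge.lock("alice", 100, "tx-1");
bridge.mint("alice", dep, ["v1", "v2"]);        // rejected: below threshold
bridge.mint("alice", dep, ["v1", "v2", "v3"]);  // minted: quorum reached
```

The Ronin hack was this exact model failing: the attackers controlled enough validator keys to reach quorum on their own, so they could "attest" to deposits that never happened and drain the vault.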
> **How to minimize bridge risk:** > - Use well-established bridges with strong track records > - Don't bridge more than you need at once > - Check if the destination chain has native on-ramps (sometimes it's cheaper and safer to just buy directly) > - Wait for transactions to fully confirm before assuming the bridge worked > - Newer "intent-based" bridges like Across tend to have smaller attack surfaces because there's no giant pool to drain ## Portfolio Trackers: Your DeFi Dashboard Once you're active across multiple chains and protocols, keeping track of everything becomes... a lot. You've got tokens in a wallet on Ethereum, an LP position on Arbitrum, some staked assets on Solana, and maybe a few NFTs you forgot about. Portfolio trackers aggregate all of it into a single dashboard: - **[DeBank](https://debank.com)** — Excellent for seeing your full DeFi portfolio across EVM chains. Shows every protocol position, token balance, and even your transaction history. The social features are a nice bonus. - **[Zapper](https://zapper.xyz)** — Clean interface, supports a huge range of protocols. Great for discovering new DeFi opportunities and tracking your net worth over time. - **Zerion** — Similar to Zapper with a polished mobile app. Also functions as a wallet. Just connect your wallet address (read-only — no signing required) and you instantly see everything you own across every chain. It's like Mint or YNAB, but for DeFi. ## Block Explorers: Reading the Blockchain Every transaction on a blockchain is public. Block explorers are the tools that make that data human-readable. - **[Etherscan](https://etherscan.io)** — The gold standard for Ethereum. Look up any wallet, transaction, or smart contract. Verify contracts, check gas costs, see token transfers. If you're on Ethereum, you'll use Etherscan constantly. Each L2 has its own variant too — Arbiscan for Arbitrum, Basescan for Base, etc. - **Solscan** — Same concept for Solana. 
Clean interface that shows transaction details, token accounts, and program interactions. > **Tip:** Whenever a transaction feels "off" — you got fewer tokens than expected, gas was weirdly high, or something just doesn't look right — paste the transaction hash into the block explorer. It tells you *exactly* what happened, down to every token transfer and contract call. Learn to read block explorers. It's like learning to read your bank statement, except it's actually transparent. ## MEV: The Invisible Tax You're Already Paying This is one of the most important topics in DeFi that most beginners have never heard of. **MEV** stands for **Maximal Extractable Value** (originally "Miner Extractable Value"), and it's the profit that can be extracted by reordering, inserting, or censoring transactions within a block. Here's the simple version: when you submit a swap on a DEX, your transaction sits in a public waiting area called the **mempool** before it gets included in a block. Bots can *see* your pending transaction and exploit it. The most common attack is the **sandwich attack:** 1. You submit a trade to buy Token X. 2. A bot sees your pending transaction and quickly buys Token X *before* you (frontrunning), pushing the price up. 3. Your trade executes at the now-higher price. 4. The bot immediately sells Token X *after* you (backrunning), pocketing the difference. You end up paying more than you should have, and the bot walks away with the profit. This happens thousands of times per day. You won't even notice — your trade still goes through, just at a slightly worse price. [code block] ### MEV Protection: Fighting Back The good news? You can protect yourself: - **[Flashbots Protect](https://www.flashbots.net)** — Instead of sending your transaction to the public mempool, Flashbots routes it through a private channel directly to block builders. Bots can't see what they can't find. 
Just add the Flashbots RPC to your wallet and your Ethereum transactions become invisible to sandwich bots. - **[MEV Blocker](https://mevblocker.io)** — Similar concept, backed by CoW Protocol. Sends transactions privately and even gives you a *rebate* if your transaction generates MEV — meaning you get paid instead of the bot. - **Private mempools** — Many L2s like Arbitrum process transactions in a sequencer that doesn't have a traditional public mempool, reducing MEV by design. > **My strong opinion:** If you're swapping on Ethereum mainnet, use Flashbots Protect or MEV Blocker. There's literally no downside. You're just opting out of being exploited. ## Intent-Based Trading: The Next Evolution Traditional DEX trading works like this: you craft a specific transaction ("swap exactly 1 ETH for USDC on Uniswap V3 at this pool"). You specify *how* the trade happens. **Intent-based** trading flips this: you just state *what* you want ("I want to sell 1 ETH for the best possible amount of USDC") and let specialized solvers compete to fill your order in the best way possible. - **[CowSwap](https://swap.cow.fi)** — Pioneered this model. Your trades are signed off-chain as "intents," then professional solvers batch them together and find optimal execution. Bonus: trades between CowSwap users can match directly (peer-to-peer) without touching a DEX at all, saving you gas and eliminating MEV. - **UniswapX** — Uniswap's intent-based system. Dutch auctions where fillers compete to give you the best price. If no one fills your order, it falls back to on-chain execution. Intent-based trading is still relatively new, but it's clearly the direction things are heading. Better prices, gas savings, and built-in MEV protection — hard to argue with that. ## Analytics: Data-Driven Decisions Want to know which protocols are actually growing? Where the real TVL is? Which chains are gaining users? 
These platforms turn raw blockchain data into actionable insights: - **[DefiLlama](https://defillama.com)** — The best free dashboard for tracking Total Value Locked (TVL) across every protocol and chain. Want to compare Aave's TVL on Ethereum vs. Arbitrum? DefiLlama. Want to see which L2 is growing fastest? DefiLlama. Bookmark it. - **[Dune Analytics](https://dune.com)** — Community-built dashboards using SQL queries on blockchain data. If you can think of a question, someone's probably already built a Dune dashboard for it. And if they haven't, you can build your own. - **[Token Terminal](https://tokenterminal.com)** — Focuses on protocol *revenue* and financial metrics. Because TVL is vanity — revenue is sanity. Great for evaluating whether a protocol's token is actually capturing value. ## Gas Trackers and Optimization Gas fees are the cost of doing business on-chain, but they fluctuate wildly. On Ethereum, a swap might cost $2 at 3 AM UTC and $30 during peak hours. **Gas optimization basics:** - **Time your transactions.** Use a gas tracker (Etherscan has one built in, or check ultrasound.money) to see current gas prices. Weekends and early mornings (UTC) tend to be cheapest. - **Set reasonable fee caps.** Unused gas gets refunded, but an inflated priority fee doesn't. Don't blindly accept wallet defaults on every transaction; most wallets let you customize the max fee. - **Use L2s for small transactions.** If you're swapping $100 worth of tokens, paying $15 in gas on mainnet Ethereum is absurd. Use Arbitrum, Base, or Optimism where gas costs pennies. - **Batch transactions when possible.** Some protocols let you claim rewards and reinvest in a single transaction instead of two separate ones. > **Reality check:** Gas optimization matters most on Ethereum mainnet. On Solana, gas is fractions of a cent. On L2s, it's usually under $0.10. Don't waste an hour optimizing a $0.03 fee — focus your energy on the chains where it actually makes a difference. 
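The fee math behind all of this is simple enough to sketch: a transaction costs gasUsed × gasPrice, converted to dollars at the ETH price. The gas usage, gas prices, and ETH price below are illustrative snapshots, not live values:

```typescript
// Back-of-the-envelope gas math. All numbers are illustrative.
function feeUsd(gasUsed: number, gasPriceGwei: number, ethUsd: number): number {
  const feeEth = (gasUsed * gasPriceGwei) / 1e9; // gwei -> ETH
  return feeEth * ethUsd;
}

const SWAP_GAS = 150_000; // roughly what a typical DEX swap consumes
const ETH_USD = 3_000;    // assumed ETH price

console.log(feeUsd(SWAP_GAS, 40, ETH_USD));   // mainnet at 40 gwei: around $18
console.log(feeUsd(SWAP_GAS, 0.05, ETH_USD)); // L2 at 0.05 gwei: a couple of cents
```

Same swap, same gas consumed; the only variable is the gas price on the chain you're using. That's the whole argument for doing small transactions on L2s.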
## Putting It All Together Here's what a well-equipped DeFi toolkit looks like in practice: 1. **Find opportunities** — DefiLlama, Dune, Token Terminal 2. **Move assets** — Across, Stargate for bridging 3. **Execute trades** — Jupiter (Solana), 1inch or CowSwap (EVM) through aggregators 4. **Protect yourself** — Flashbots Protect or MEV Blocker enabled in your wallet 5. **Track everything** — DeBank or Zapper for portfolio overview 6. **Verify transactions** — Etherscan, Solscan when something looks off None of these tools require you to deposit funds or give up custody. They're all interfaces to the same open, permissionless infrastructure. That's the beauty of DeFi — you can switch tools anytime, and your assets stay in your wallet. ## What's Next We've covered how to trade, lend, provide liquidity, and now how to do all of it efficiently with the right tools. But crypto isn't just about finance. In **[Part 17](/blog/crypto-unlocked-17-web3-ownership-internet)**, we'll explore the broader world of **Web3** — decentralized identity, social networks, gaming, and the applications being built on top of blockchain technology that have nothing to do with trading. The financial layer is just the beginning.
← [Previous: Options & Advanced Trading](/blog/crypto-unlocked-15-options-advanced-instruments) · [Series Index](/blog/series/crypto-unlocked) · [Next: Web3 — The Ownership Internet](/blog/crypto-unlocked-17-web3-ownership-internet) →
--- --- # Crypto Unlocked Part 17: Web3 — The Ownership Internet URL: /blog/crypto-unlocked-17-web3-ownership-internet Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Web3, DAOs, Governance, Beginners Series: Crypto Unlocked (Part 17 of 21) --- Web1 was read. Web2 was read-write. Web3 is read-write-own. DAOs, decentralized identity, and why digital ownership changes everything. Imagine you've spent ten years building a following on Instagram. A million followers. Brand deals. A real business. Then one morning you wake up, open the app, and your account is gone. Disabled. No warning, no appeal, no recourse. A decade of work — wiped out by a content moderation algorithm that flagged the wrong post. This isn't hypothetical. It happens constantly. And it reveals something uncomfortable about the internet we've all been using: **you don't actually own anything on it.** Web3 is the movement trying to change that. Let's talk about what it means — and whether it's actually working. ## The Three Eras of the Internet [code block] People love neat narratives, and the Web1 → Web2 → Web3 framework is almost too clean. But it's useful: - **Web1 (1990s–2004):** Read-only. Static pages. You consumed content that someone else published. Think GeoCities, early Yahoo, encyclopedia sites. The internet was a digital library. - **Web2 (2004–present):** Read-write. User-generated content. You could publish, comment, share, create. Think YouTube, Instagram, Twitter, TikTok. The internet became a platform. - **Web3 (emerging):** Read-write-own. You can create *and* own what you create — your content, your identity, your data, your relationships with your audience. The internet becomes infrastructure you have a stake in. The key shift isn't technological. It's about **who controls the value you create**. In Web2, you create the content, but the platform captures the value. Your tweets make Twitter valuable. Your videos make YouTube valuable. Your photos make Instagram valuable. 
But you can't take your followers to a competitor. You can't leave without starting from zero. Web3 says: what if the users owned the platforms themselves? ## The Ownership Problem Is Real Before we get into solutions, let's be honest about the problem. It's bigger than most people realize: - **Your Spotify playlists?** Spotify's, not yours. They can remove songs, change the interface, or shut down entirely. You're renting access to music. - **Your Steam game library?** You own licenses, not games. Valve can revoke access at any time. You spent $3,000 on games you can never resell. - **Your Instagram followers?** Instagram's. Get banned and they're gone. You can't email them, you can't migrate them. - **Your Google Docs?** Google's servers, Google's rules. They've locked people out of their entire digital lives over automated policy violations. This isn't a conspiracy — it's the business model. Free services need to monetize somehow, and that means **you are the product**, not the customer. Your data, your attention, your social graph — all owned by corporations that answer to shareholders, not users. > **The core insight of Web3:** If blockchains let us own digital money without a bank, why can't they let us own digital *everything* without a platform? ## DAOs: Internet-Native Organizations One of the most ambitious ideas to come out of Web3 is the **DAO** — a Decentralized Autonomous Organization. Think of it as a company that runs on code instead of corporate law. Here's how a traditional company works: shareholders elect a board, the board hires executives, executives make decisions. Information flows through hierarchies. If you own stock in Apple, you technically have a vote — but good luck influencing anything. A DAO flips this. Token holders vote directly on proposals. The rules are encoded in smart contracts. The treasury is on-chain and transparent. There's no CEO with a corner office — there's a community with a group chat. 
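The token-voting mechanics are simple enough to sketch in TypeScript. This is a toy tally with invented names and balances; real DAOs add quorums, voting delays, and timelocked execution on top of this:

```typescript
// Toy token-weighted governance tally with delegation.
// All addresses and token balances are made up for illustration.
interface Voter {
  address: string;
  tokens: number;
  delegateTo?: string; // optional: lend your voting power to someone else
}

function tally(voters: Voter[], votes: Map<string, "for" | "against">) {
  // Resolve delegation: a voter's tokens count toward their delegate.
  const power = new Map<string, number>();
  for (const v of voters) {
    const rep = v.delegateTo ?? v.address;
    power.set(rep, (power.get(rep) ?? 0) + v.tokens);
  }
  let forVotes = 0;
  let againstVotes = 0;
  for (const [addr, choice] of votes) {
    const p = power.get(addr) ?? 0;
    if (choice === "for") forVotes += p;
    else againstVotes += p;
  }
  return { forVotes, againstVotes, passed: forVotes > againstVotes };
}

const voters: Voter[] = [
  { address: "whale", tokens: 1_000_000 },
  { address: "alice", tokens: 5_000, delegateTo: "bob" },
  { address: "bob", tokens: 2_000 },
];
const result = tally(
  voters,
  new Map<string, "for" | "against">([["whale", "against"], ["bob", "for"]])
);
// bob votes with 7,000 tokens of delegated power, but the whale still wins.
```

Notice how the outcome is decided entirely by the whale, despite the delegation. That's the plutocracy problem in miniature: one token, one vote means the richest holder rules.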
**Some notable DAOs:** - **[MakerDAO](https://vote.makerdao.com)** — Governs the DAI stablecoin (now rebranded as part of the [Sky ecosystem](https://sky.money), with DAI becoming USDS and MKR becoming SKY). Token holders vote on risk parameters like collateral types and interest rates. It's essentially a decentralized central bank with billions in assets. - **[Uniswap Governance](https://app.uniswap.org/vote)** — The largest decentralized exchange is governed by UNI token holders who vote on fee structures, treasury spending, and protocol upgrades. - **[Nouns DAO](https://nouns.wtf)** — One NFT is auctioned every single day. Each Noun NFT gives you one vote in the DAO's treasury, which funds public goods and creative projects. It's like a weird, wonderful art collective with a massive war chest. - **[ConstitutionDAO](https://en.wikipedia.org/wiki/ConstitutionDAO)** — In November 2021, thousands of strangers on the internet pooled $47 million in ETH to try to buy an original copy of the U.S. Constitution at a Sotheby's auction. They were outbid at $43.2 million — the organizers couldn't go higher because they needed funds to insure, store, and transport the document. The DAO disbanded shortly after, but proved that internet strangers could coordinate capital at scale in days. > **Think of DAOs like this:** If a subreddit could have a bank account and legally binding votes, you'd have something close to a DAO. ### How Governance Actually Works ![DAO governance dashboard showing proposals, voting, and treasury management](/assets/blog/crypto-unlocked-17/dao-governance-interface.png) DAO governance typically follows this pattern: 1. **Someone writes a proposal** — "Let's spend $500K from the treasury to fund developer grants" 2. **Community discusses it** — Usually on a forum (Discourse, Commonwealth) before it goes to a vote 3. **Token holders vote** — One token usually equals one vote. 
Voting happens on-chain or through [Snapshot](https://snapshot.org) voting (off-chain but verifiable) 4. **If it passes, code executes** — Smart contracts can automatically release funds, change parameters, or trigger actions There's also **delegation** — you can delegate your voting power to someone you trust, similar to representative democracy. Don't have time to read every proposal? Delegate to a community member who does. It's messy, slow, and sometimes dysfunctional — just like regular democracy. But it's transparent in ways traditional governance never is. Every vote, every treasury movement, every decision is public and auditable. ## Decentralized Identity: You Are Your Wallet In Web2, your identity is scattered across dozens of platforms. Your LinkedIn profile, your Twitter handle, your email address — all controlled by different companies, none portable. Web3 introduces the idea of **self-sovereign identity**. Your wallet address becomes your universal login. But raw wallet addresses (0x7a3F...9b2E) aren't exactly user-friendly, which is where projects like **ENS** come in. [code block] **[ENS (Ethereum Name Service)](https://ens.domains)** lets you register a human-readable name like `yourname.eth`. It works like a domain name for your wallet — and for your identity. You can attach your website, social profiles, email, and more to a single ENS name that **you** own. No company can take it away. As of today, over 600,000 owners hold more than 1.3 million ENS names, with integrations across wallets like Coinbase Wallet, Rainbow, and browsers like Brave. Beyond names, there's the concept of **on-chain reputation**. Your wallet's history tells a story: - Which protocols have you used? - How long have you been active? - Have you participated in governance? - What communities are you part of? Projects are experimenting with **soulbound tokens** — non-transferable NFTs that represent credentials, memberships, or achievements. 
Think of them as on-chain badges that prove you attended an event, completed a course, or contributed to a project. A resume that can't be faked. ## Social Protocols: Decentralized Social Media If platforms own your social graph, the obvious Web3 answer is: build social networks where users own their accounts, followers, and content. **[Farcaster](https://farcaster.xyz)** is the most interesting attempt so far. It's a decentralized social protocol (think: decentralized Twitter) where: - Your identity is tied to your wallet, not an email address - Your social graph lives on a decentralized network, not a company's server - Any developer can build a client (app) on top of the protocol — the most popular being [Warpcast](https://warpcast.com) - If you don't like one app, switch to another — your followers come with you **[Lens Protocol](https://lens.xyz)** has evolved from a social graph into a full SocialFi platform — a dedicated chain (built on ZKSync and Avail) with social primitives baked in. Accounts, usernames, graphs, feeds, and groups are all on-chain building blocks that any developer can plug into. Users get gasless transactions settled in GHO, and can switch between apps while keeping their data portable. These platforms are still early and niche. But the idea is powerful: **what if switching social networks was as easy as switching email clients?** Your data stays yours; the apps just provide different interfaces. ## Where Web3 Data Actually Lives A lot of early Web3 NFT artwork and dApp data still lives on regular old servers. If that server goes down, your "decentralized" asset points to a dead link. Not exactly the revolution. That's where decentralized storage comes in: - **[IPFS (InterPlanetary File System)](https://ipfs.tech)** — A peer-to-peer network where files are addressed by their content hash, not their location. Instead of "this file lives at amazon.com/server5/image.jpg," it's "this file has fingerprint QmX7b3...". 
Anyone hosting a copy can serve it. Data is open, verifiable, and resilient — if one node goes offline, others can still serve the content. - **[Arweave](https://www.arweave.org)** — Permanent storage. You pay once, and your data is stored forever (in theory) across a decentralized network. As the project describes itself: "like Bitcoin, but for data." Think of it as the Internet Archive, but trustless. - **[Filecoin](https://filecoin.io)** — A marketplace for storage. People with spare hard drive space rent it out; people who need storage pay for it. Supply and demand for data hosting. These aren't just crypto curiosities. If Web3 is about ownership, you need somewhere to store what you own that isn't controlled by Amazon, Google, or Microsoft. ## The Creator Economy, Reimagined This is where Web3 gets genuinely exciting — and where it's already showing real results. In Web2, the creator economy has a middleman problem: - Musicians get fractions of a cent per Spotify stream - Writers depend on platform algorithms for visibility - Artists sell through galleries that take 50% commissions - Everyone relies on platforms that can change the rules overnight Web3 offers a different model. Smart contracts enable **direct relationships** between creators and fans: - Musicians can sell music NFTs directly, with smart contracts that pay royalties on every resale — forever - Writers can tokenize their work, giving supporters ownership stakes in the content they fund - Artists can sell directly to collectors, with programmable royalties built into the contract - Creators can issue tokens to their most engaged fans, creating community ownership The creator doesn't need to trust a platform to distribute revenue fairly — the code does it automatically. And because the fan relationship is on-chain, no platform can take it away. ## The Skeptic's View Alright, let's be honest. Web3 has real problems, and ignoring them doesn't help anyone. 
**The complexity argument.** Try explaining to your parents how to set up a MetaMask wallet, bridge ETH to an L2, connect to a dApp, and vote on a DAO proposal. Web2 won because it was *easier* than what came before. Web3 is currently harder. That's a real barrier. **The "solution looking for a problem" critique.** Do most people actually care about owning their social graph? Or do they just want an app that works? For the average user, Instagram's convenience beats Farcaster's philosophy every day of the week. **Governance fatigue.** Most DAO token holders don't vote. Voter turnout is often below 10%. The people who *do* participate tend to be whales (large token holders), which means "decentralized governance" can look a lot like plutocracy — rule by the richest. **Speculation dominance.** Too much of Web3 is still driven by speculation rather than utility. When most people buy governance tokens, they're hoping the price goes up — not planning to vote on proposal #247 about treasury diversification. These are fair critiques. But here's the thing: the early internet had the same problems. Email was confusing. Websites looked terrible. Skeptics called it a fad. The technology matured, the UX improved, and the use cases became undeniable. Web3 might follow the same path — or it might not. The honest answer is that we don't know yet. But the underlying idea — that users should own their digital lives — is hard to argue against. The question is whether blockchain is the right tool to get there. > **My take:** Web3's vision is right. The execution is still catching up. The projects that win will be the ones where users don't even realize they're using a blockchain — they just notice that things work better and they have more control. 
## Key Takeaways - **Web3 = read-write-own.** The evolution from consuming content to creating it to *owning* it - **DAOs** are internet-native organizations governed by token holders through on-chain voting - **Decentralized identity** (ENS, soulbound tokens) gives you a portable, self-sovereign digital identity - **Social protocols** like Farcaster aim to let users own their social graph across applications - **Decentralized storage** (IPFS, Arweave, Filecoin) ensures Web3 data isn't dependent on centralized servers - **The creator economy** benefits enormously from direct, programmable relationships between creators and fans - **The challenges are real** — UX complexity, governance apathy, and speculation still dominate the space ## What's Next We've covered the theory — blockchains, DeFi, NFTs, and now Web3. But what does all this look like in practice? In **[Part 18](/blog/crypto-unlocked-18-real-world-applications)**, we'll explore **real-world applications** — the places where crypto is already making a tangible difference outside of trading and speculation. Supply chains, remittances, digital identity in developing nations, and more. The rubber meets the road.
← [Previous: On-Chain Trading Tools](/blog/crypto-unlocked-16-trading-tools) · [Series Index](/blog/series/crypto-unlocked) · [Next: Real-World Applications](/blog/crypto-unlocked-18-real-world-applications) →
--- --- # Crypto Unlocked Part 18: Real-World Applications URL: /blog/crypto-unlocked-18-real-world-applications Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, RWA, DePIN, Gaming, AI, Beginners Series: Crypto Unlocked (Part 18 of 21) --- Beyond speculation — how crypto is being used for real-world assets, physical infrastructure, gaming, and the convergence of AI and blockchain. Here's the uncomfortable truth about crypto: for most of its existence, the killer app has been... trading more crypto. Tokens that exist to be swapped for other tokens. Yields that come from other people's deposits. A financial ouroboros eating its own tail. But something shifted. Quietly, while everyone was arguing about memecoins and ETF flows, crypto started doing *real things*. Boring things. Useful things. The kind of things that don't make headlines but actually matter. Let's talk about what crypto looks like when it grows up. ## Real-World Assets: Wall Street Comes On-Chain **RWA (Real-World Asset) tokenization** is exactly what it sounds like — taking things that exist in the real world (stocks, bonds, real estate, commodities) and representing them as tokens on a blockchain. Think of it like this: you know how your stock portfolio is really just numbers in a brokerage database? Tokenization puts those numbers on a blockchain instead, where they can move 24/7, settle instantly, and be sliced into tiny pieces. Why does this matter? - **Fractional ownership.** Can't afford a $500K commercial property? Buy $500 worth of tokens representing a share of it. - **Instant settlement.** Traditional stock trades take T+1 (one business day) to settle. On-chain? Minutes. - **Global access.** A farmer in Nigeria can invest in US Treasury bonds without a Goldman Sachs account. [code block] ![RWA tokenization: traditional assets flow through blockchain to become accessible digital tokens](/assets/blog/crypto-unlocked-18/rwa-tokenization-flow.png) The biggest signal that this is real? 
**BlackRock** — the world's largest asset manager with over $10 trillion under management — launched [BUIDL](https://securitize.io/buidl), a tokenized US Treasury fund on Ethereum in March 2024. It quickly became the largest tokenized fund, surpassing $500 million in assets within months. When Larry Fink goes on CNBC and calls tokenization "the next generation for markets," you pay attention. This isn't crypto people talking to crypto people anymore. This is Wall Street rebuilding its plumbing. > **The big picture:** By late 2025, over $12 billion in real-world assets were tokenized on-chain (track it live on [RWA.xyz](https://rwa.xyz/)). That's a rounding error compared to traditional finance — which is exactly why the growth runway is enormous. ## Stablecoin Payments: Crypto's Trojan Horse Remember stablecoins from [Part 7](/blog/crypto-unlocked-07-tokens-and-standards)? Turns out they're the single most useful thing crypto has produced. Stablecoin transfer volume has **surpassed Visa's annual payment volume** in raw value moved. Let that sink in. Now, a caveat: stablecoin volume includes DeFi transfers and trading settlement, not just payments — so it's not a perfect apples-to-apples comparison. But even adjusted for that, the sheer scale of value flowing through stablecoin rails is staggering. Here's why: - **Cross-border payments.** Sending $10,000 from the US to the Philippines through traditional banking takes 3-5 days and costs $200-500 in fees. With USDC on a modern blockchain? Minutes. Pennies. - **Dollar access.** For billions of people in countries with unstable currencies, stablecoins are the easiest way to hold dollars. No bank account needed. Just a phone. - **Business payments.** Companies are increasingly settling invoices in stablecoins because it's faster and cheaper than wire transfers. This isn't theoretical. Stripe integrated stablecoin payments. PayPal launched its own stablecoin. Circle (USDC issuer) processes billions in weekly volume. 
The boring plumbing of global commerce is being rewired, and most people using it don't even know they're "using crypto." > **Hot take:** Stablecoins will onboard more people to crypto than any DeFi protocol, NFT collection, or memecoin ever will. Most of them won't even realize they're using blockchain. And that's the point. ## DePIN: The Physical World Goes Decentralized **DePIN** stands for **Decentralized Physical Infrastructure Networks**. Yes, it's a mouthful. But the concept is simple: instead of one company building and owning infrastructure, thousands of individuals contribute their resources and get paid in tokens. Think of it as the Airbnb model applied to... everything: ![A global mesh of DePIN nodes — wireless hotspots, GPUs, and dashcams contributing to decentralized infrastructure](/assets/blog/crypto-unlocked-18/depin-global-network.png) - **[Helium](https://www.helium.com/)** — People set up small wireless hotspots in their homes and businesses. Together, they've built a people-powered wireless network with global coverage. T-Mobile partnered with them to fill coverage gaps. Instead of one telecom spending billions on towers, thousands of people earn HNT tokens for providing connectivity. - **[Render Network](https://rendernetwork.com/)** — Need GPU power for 3D rendering or AI training? Instead of renting from AWS, Render connects you to a distributed network of GPU owners. It now integrates with leading generative AI tools from Runway, Black Forest Labs, and Stability AI — making it a full-stack platform for next-gen digital creation. People with idle graphics cards earn RENDER tokens by lending their compute power. - **[Hivemapper](https://hivemapper.com/)** — Dashcam owners contribute street-level imagery as they drive. Together, they're building a constantly-updated map of the world in real time, competing with Google Maps. Contributors earn HONEY tokens, with a burn-and-mint model that keeps the economics sustainable. 
The pattern is always the same: **a token incentivizes people to build something that would normally require a massive corporation**. It's crowd-sourced infrastructure with built-in economic incentives. Does every DePIN project work? No. Many are overhyped. But the model — using tokens to bootstrap real physical networks — is genuinely powerful. ## Supply Chain: From Factory to Shelf Here's a problem that's existed forever: how do you *really* know where your stuff comes from? That "organic" coffee — is it actually organic? Those "ethically sourced" diamonds — who verified that? That pharmaceutical — is it counterfeit? Blockchain gives you an **immutable trail of custody**. Every step from factory to shelf gets recorded on-chain, and nobody can alter the history after the fact. - **VeChain** works with luxury brands to verify authenticity (no more fake handbags) - **IBM Food Trust** (built on Hyperledger) tracks food from farm to supermarket — when there's an E. coli outbreak, they can trace the source in seconds instead of weeks - **Pharmaceutical tracking** ensures drugs aren't counterfeit — a massive problem in developing countries This isn't the sexiest use case, but it might be one of the most impactful. Supply chain transparency saves lives. ## Gaming and GameFi: Learning From Failure Let's be honest about gaming and crypto — the first attempt was mostly a disaster. **Axie Infinity** was the poster child of "play-to-earn" gaming. At its peak, people in the Philippines were earning a living playing a Pokémon-style game. Then the economy collapsed. Turns out, a game where new players fund existing players' earnings is just a Ponzi scheme with cute monsters. **The lesson:** You can't build a game where the primary motivation is making money. Fun has to come first. 
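That death spiral is easy to model. Here's a toy TypeScript simulation, with all numbers invented, of an economy where payouts to existing players are funded solely by new players' buy-ins:

```typescript
// Toy play-to-earn treasury model. Revenue comes only from new-player
// buy-ins; every active player expects earnings each round.
// All numbers are invented for illustration.
function simulate(
  newPlayersPerRound: number[],
  buyIn: number,
  earnPerRound: number
): number[] {
  let treasury = 0;
  let players = 0;
  const balances: number[] = [];
  for (const joined of newPlayersPerRound) {
    players += joined;
    treasury += joined * buyIn;       // income: only new buy-ins
    treasury -= players * earnPerRound; // cost: everyone gets paid
    balances.push(treasury);
  }
  return balances;
}

// Rapid growth, then a plateau, then no new joins at all:
const balances = simulate([100, 200, 400, 400, 100, 0], 10, 3);
// The treasury grows while joins accelerate, then flips negative
// once growth stalls.
```

As long as new players arrive faster than payouts accrue, everything looks healthy. The moment inflows stall, the treasury goes negative, which is roughly what happened when Axie's new-player growth dried up.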
The second wave is more promising: - **[Immutable](https://www.immutable.com/)** provides the infrastructure for blockchain gaming — fast, gas-free NFT trades for in-game items, with over $2 billion in funding flowing to games building on the platform. Games like Gods Unchained and Illuvium are built on it. - **True ownership** of in-game assets means you can sell your sword or skin on any marketplace, not just the game's own shop. You *actually* own your stuff. - **Interoperability** (eventually) could let you use items across different games — though we're far from this reality. > **Reality check:** No blockchain game has come close to competing with traditional games on gameplay quality. The tech adds real benefits (ownership, open economies), but until a genuinely fun AAA game ships with blockchain under the hood, mass adoption remains a dream. ## AI × Crypto: The Convergence Nobody Expected This is where things get weird — and fascinating. AI needs three things: **compute, data, and coordination**. Crypto is surprisingly good at all three. **Decentralized compute:** - **[Akash Network](https://akash.network/)** is like an Airbnb for cloud computing. Anyone with spare servers can rent them out, and AI developers get cheaper GPU compute than AWS — purpose-built for AI workloads like model training, inference, and large-scale data processing. - **[Render](https://rendernetwork.com/)** (yes, the same one from DePIN) also serves AI workloads — GPU compute is GPU compute, whether you're rendering 3D scenes or training models. **AI agents with wallets:** This one is mind-bending. Imagine an AI assistant that can *actually pay for things*. Not "click buy for you" — literally has its own crypto wallet, manages its own budget, pays for API calls, hires other AI agents. Crypto gives AI economic autonomy because blockchains don't care if the user is human or machine. **Data marketplaces:** AI models need training data. 
Blockchain-based marketplaces like **[Ocean Protocol](https://oceanprotocol.com/)** let data owners sell access to their datasets without losing control of the underlying data. You monetize your data while maintaining privacy. > **My prediction:** The AI × crypto intersection will be the defining narrative of the next cycle. Not because of hype — because AI genuinely needs decentralized infrastructure, and crypto genuinely needs useful applications. They solve each other's problems. ## Music, Entertainment, and Creator Economics Artists have been getting screwed by the music industry for decades. Streaming pays fractions of a cent per play. Labels take the lion's share. Creators own nothing. Crypto offers an alternative: - **[Royal.io](https://royal.io/)** pioneered letting fans buy shares of songs — literally owning a percentage of streaming royalties. When the song earns money, token holders get paid. The platform has since transitioned to a legacy model, with royalty claims handled through its [LDA portal](https://lda.royal.io/). - **Sound.xyz** let artists release music as limited-edition collectibles, with fans getting exclusive access and artists getting paid directly. The platform [shut down in January 2026](https://www.sound.xyz/) to focus on its successor, [Vault](https://vault.fm) — but everything collectors owned remains on-chain and accessible. A reminder that on-chain ownership outlives the platforms built on top of it. It's early, and this space is still evolving — platforms come and go, but the principle matters: **creators can monetize directly, without intermediaries taking 80% of the value**. That's a big deal. ## Insurance, Identity, and the Long Tail A few more sectors where crypto is making quiet progress: **Decentralized Insurance:** - **[Nexus Mutual](https://nexusmutual.io/)** lets people pool funds to insure against smart contract hacks, custody failures, depegs, and slashing events. 
No insurance company needed — the community decides on claims. With over $6 billion in crypto protected and 10,000+ covers provided, it's the leading on-chain insurance alternative. **Identity and Credentials:** - **Verifiable credentials** on-chain mean your university degree, professional certification, or proof of age can be cryptographically verified without calling the issuing institution. - **Proof of Humanity** and similar protocols establish that you're a real person — increasingly important in a world of AI-generated everything. - **Self-sovereign identity** means *you* control your data, not Facebook or Google. These aren't sexy. They won't pump your bags. But they're the infrastructure of a more open internet. ## The Adoption Curve: Where Are We Really? Let's zoom out. After 15+ years, where does crypto actually stand? **What's working:** - Stablecoins (massive, growing, genuinely useful) - RWA tokenization (institutional adoption accelerating) - DePIN (real networks, real users, real revenue) - Cross-border payments (life-changing for developing nations) **What's promising but early:** - Gaming (right model, needs better games) - AI × crypto (logical convergence, mostly experimental) - Creator economics (real value prop, small scale) **What's still mostly hype:** - "Everything will be tokenized" maximalism - Metaverse/virtual real estate - Most governance tokens ![Where different crypto use cases sit on the technology adoption curve](/assets/blog/crypto-unlocked-18/crypto-adoption-curve.jpg) If I had to place crypto on the classic technology adoption curve, I'd say we're at the **early majority** phase for stablecoins and payments, but still in **early adopter** territory for almost everything else. The gap between "this could change the world" and "this has changed the world" is still wide for most use cases. But the direction is clear. The question isn't *whether* crypto will have real-world applications — it already does. 
The question is how fast the rest catches up. --- ## What's Next In **[Part 19](/blog/crypto-unlocked-19-2025-2026-landscape)**, we'll survey the **2025-2026 crypto landscape** — what's changed, what's emerging, and what the current cycle looks like. ETF impacts, regulatory shifts, new narratives, and where the smart money is flowing. A snapshot of crypto right now. *See you there.* 🔭
← [Previous: Web3 — The Ownership Internet](/blog/crypto-unlocked-17-web3-ownership-internet) · [Series Index](/blog/series/crypto-unlocked) · [Next: The 2025-2026 Landscape](/blog/crypto-unlocked-19-2025-2026-landscape) →
--- --- # Crypto Unlocked Part 19: The 2025-2026 Landscape URL: /blog/crypto-unlocked-19-2025-2026-landscape Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Bitcoin ETF, Ethereum, 2026, Beginners Series: Crypto Unlocked (Part 19 of 21) --- Bitcoin ETFs, institutional adoption, Ethereum's roadmap, Solana's comeback, and Hyperliquid's rise — where crypto stands right now and what's coming next. Two years ago, if you told a Wall Street banker that BlackRock would be *begging* the SEC for a Bitcoin ETF, they'd have laughed you out of the room. If you told them Bitcoin would crack $100,000, they might have called security. And yet — here we are. The 2024-2025 stretch has been the most transformative period in crypto's short history. Not because of any single technology breakthrough, but because the *entire narrative changed*. Crypto stopped being the weird internet money your nephew wouldn't shut up about at Thanksgiving. It became a legitimate asset class that pension funds allocate to and presidents campaign on. Let's walk through everything that happened, what it means, and where we're headed. ## The ETF Big Bang [January 10, 2024](https://www.sec.gov/news/statement/gensler-statement-spot-bitcoin-011023) was crypto's moon landing moment. After a decade of rejections, the SEC finally approved spot Bitcoin ETFs — and not just one. [Eleven launched simultaneously](https://www.bloomberg.com/news/articles/2024-01-10/sec-approves-first-us-spot-bitcoin-etfs), with heavyweights like BlackRock ([iShares Bitcoin Trust](https://www.ishares.com/us/products/333011/ishares-bitcoin-trust), ticker: IBIT) and Fidelity (Wise Origin Bitcoin Fund) leading the charge. 
![Bitcoin spot ETF inflows reshaped institutional crypto adoption in 2024](/assets/blog/crypto-unlocked-19/bitcoin-etf-inflows.jpg) The numbers were staggering: - **BlackRock's IBIT** became the [fastest ETF in history to reach $10 billion in assets](https://www.etf.com/sections/news/ibit-fastest-etf-ever-reach-10-billion-aum) — doing so in roughly 49 days - Within months, Bitcoin ETFs were pulling in more daily inflows than gold ETFs - By the end of 2024, these products held over **$100 billion** in Bitcoin collectively Why does this matter so much? Because an ETF is a wrapper. It lets your retirement account, your financial advisor, your grandma's brokerage — anyone with a traditional investment account — buy Bitcoin without dealing with wallets, seed phrases, or exchanges. It removed the last excuse institutional money had for staying on the sidelines. > **Think of it this way:** Before ETFs, buying Bitcoin was like buying a house directly — inspections, paperwork, risk of getting scammed. The ETF turned it into buying a REIT: same exposure, fraction of the hassle. ## Bitcoin Breaks $100K With institutional money pouring in and the [April 2024 halving](https://www.coindesk.com/tech/2024/04/20/bitcoin-halving-what-you-need-to-know/) cutting new supply in half, Bitcoin did what many thought was inevitable but still felt surreal: on **December 5, 2024**, it crossed **$100,000** for the first time. ![Bitcoin crossing $100,000 — a psychological milestone that made global headlines](/assets/blog/crypto-unlocked-19/bitcoin-100k-milestone.jpg) This wasn't just a number. It was psychological. Six figures. The kind of price tag that makes front-page news, that gets your non-crypto coworkers asking questions at lunch, that makes politicians pay attention. And pay attention they did. The 2024 US presidential election saw candidates from both parties publicly courting crypto voters. Bitcoin went from regulatory threat to political talking point seemingly overnight. 
## Ethereum Gets Its ETF (Sort Of) Ethereum spot ETFs followed in mid-2024 — the SEC [approved the key 19b-4 filings in May](https://www.sec.gov/news/statement/crenshaw-statement-spot-ether-052324), with products beginning to trade in July. But there was a notable asterisk: **no staking**. The SEC approved the products but required issuers to hold plain ETH — no earning yield through staking. This matters because staking is a core part of Ethereum's value proposition (we covered this in [Part 9](/blog/crypto-unlocked-09-defi-fundamentals)). Without it, Ethereum ETFs were a bit like buying a rental property but being told you can't collect rent. Still valuable for exposure, but leaving money on the table. The staking question remains one of the biggest regulatory battles heading into 2026. If it gets resolved, expect a flood of new capital into ETH products. ## Ethereum's Grand Roadmap Speaking of Ethereum — it's been quietly executing one of the most ambitious technical roadmaps in software history. Vitalik Buterin laid it out in five phases, each with a delightfully nerdy name: - **The Surge** — Massive scaling through rollups and sharding. The star here is **[EIP-4844](https://www.eip4844.com/)** (Proto-Danksharding), which went live on March 13, 2024 as part of the Dencun upgrade and slashed Layer 2 transaction costs by up to 90%. If you used Base or Arbitrum recently and noticed fees measured in fractions of a cent — that's EIP-4844 at work. - **The Scourge** — Tackling centralization risks, particularly around MEV (the value that validators can extract by reordering transactions — think of it as cutting in line at the stock exchange). - **The Verge** — Making it possible to verify the entire Ethereum state without running a massive node. The goal: you could validate the chain from your phone. - **The Purge** — Cleaning up old data requirements so running a node doesn't need a data center. - **The Splurge** — Everything else. The polish. 
Full Danksharding — the complete vision for Ethereum's data availability layer — is still being developed, but proto-danksharding already proved the concept works. Ethereum is becoming the settlement layer that L2s build on, and the roadmap is making that vision cheaper and more accessible with every upgrade. ## Solana's Phoenix Moment If there's a comeback story in crypto, it's Solana. Rewind to late 2022: FTX collapsed, and Solana was caught in the blast radius. Sam Bankman-Fried's empire had been one of Solana's biggest backers. SOL crashed from $260 to under $10. Obituaries were written. "Solana is dead" became a meme. Fast forward to 2025: SOL is back in the **top 5** by market cap. What happened? - **The tech held up.** While the network had reliability issues early on, uptime improved dramatically through 2024-2025. Firedancer — a new validator client built by Jump Crypto — brought serious performance upgrades. - **DeFi exploded.** Protocols like Jupiter, Marinade, and Raydium turned Solana into a DeFi powerhouse with actual usage, not just speculation. - **Institutional interest returned.** Multiple asset managers filed for Solana ETFs in 2024, and the ecosystem attracted serious venture capital again. - **The memecoin machine.** Love it or hate it (more on this below), Solana became *the* chain for memecoin launches, bringing millions of new users into the ecosystem. Solana bet on speed and low cost from day one. That bet is paying off. ## Hyperliquid: The Exchange That Came From Nowhere One of the most fascinating stories of 2024-2025 is **[Hyperliquid](https://app.hyperliquid.xyz/)** — a perpetual futures exchange built on its own Layer 1 blockchain that went from relative obscurity to doing **billions in daily trading volume**. 
What makes Hyperliquid special: - **On-chain order book.** Unlike most DeFi exchanges that use automated market makers, Hyperliquid runs a full order book — the same model as traditional exchanges like the NYSE — but fully on-chain. - **Speed.** Sub-second finality. It feels like using a centralized exchange but with the transparency and self-custody of DeFi. - **The $HYPE token launch.** In late 2024, Hyperliquid airdropped its native token to users. No VC allocation. No insider deals. Just a massive distribution to actual users. $HYPE became one of the top-performing tokens of the cycle. Hyperliquid showed that DeFi can compete with centralized exchanges on performance, not just ideology. That's a big deal. ## The L2 Explosion Remember when launching a blockchain was a multi-year, multi-million dollar endeavor? Those days are gone. 2024-2025 saw an explosion of **Layer 2 networks** — chains that settle on Ethereum but handle transactions on their own: ![The Ethereum L2 ecosystem — dozens of rollups all settling back to Ethereum for security](/assets/blog/crypto-unlocked-19/l2-ecosystem-map.jpg) - **Base** (by Coinbase) — became a DeFi and social hub almost overnight - **Blast** — attracted billions with its native yield offering - **Mode, Manta, Scroll, Linea** — each carving out niches - **OP Stack and Arbitrum Orbit** — frameworks that let anyone spin up an L2 in weeks You can track all of them — TVL, risk assessments, technology choices — on [L2Beat](https://l2beat.com/scaling/summary), the go-to dashboard for the L2 ecosystem. The running joke became "everyone and their dog is launching a chain." And honestly? That's kind of the point. The vision is a future where applications run on purpose-built chains that all communicate with each other, settling back to Ethereum for security. Whether we need hundreds of L2s is debatable. But the infrastructure to build them is now commoditized, and that's genuinely exciting. 
## Stablecoin Regulation Is Here Stablecoins — crypto tokens pegged to fiat currencies like the dollar — have quietly become crypto's killer app. Tether (USDT) and Circle's USDC process more value transfer than many traditional payment networks. Regulators finally noticed: - **[MiCA (Markets in Crypto-Assets)](https://www.esma.europa.eu/esmas-activities/digital-finance-and-innovation/markets-crypto-assets-regulation-mica)** rolled out in two phases: stablecoin rules (asset-referenced tokens and e-money tokens) took effect on June 30, 2024, and the full framework — covering all crypto-asset service providers — became [fully applicable on December 30, 2024](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32023R1114). Europe now has the clearest regulatory framework in the world. - **The US** has been working on stablecoin-specific legislation, with bipartisan support in Congress. The debate centers on who gets to issue them (banks only? fintech companies too?) and what reserves they need to hold. > **Why stablecoin regulation matters for everyone:** It's the bridge between crypto and traditional finance. Clear rules mean more banks can offer crypto services, more businesses can accept stablecoins, and more people can use them for everyday payments. Boring? Yes. Important? Massively. ## The Institutional Tsunami The ETFs were just the beginning. The broader institutional adoption story of 2024-2025 includes: - **[Strategy](https://www.strategy.com/)** (formerly MicroStrategy) continued stacking Bitcoin aggressively, holding over [650,000 BTC as of late 2025](https://saylortracker.com/) and becoming a de facto Bitcoin proxy in the stock market. Their playbook — issue debt to buy Bitcoin — spawned imitators worldwide. You can track their holdings in real-time on [Saylor Tracker](https://saylortracker.com/). - **El Salvador** made history as the first country to adopt Bitcoin as legal tender in 2021. 
However, the experiment proved controversial — in 2024, as part of an agreement with the [IMF](https://www.imf.org/), the country partially scaled back its Bitcoin involvement, and by 2025, Bitcoin was [rescinded as legal tender](https://en.wikipedia.org/wiki/Bitcoin_in_El_Salvador). Research showed it was rarely used by the public, though Bukele's government remains Bitcoin-friendly. - **Banks** like JPMorgan, Goldman Sachs, and Morgan Stanley began offering crypto products to wealth management clients. - **Sovereign wealth funds** in the Middle East and Asia started making quiet allocations. The narrative shifted from "should institutions own crypto?" to "how much crypto should institutions own?" That's a fundamental change. ## Memecoins: The Controversial Front Door We can't talk about 2024-2025 without addressing the elephant (or dog, or frog) in the room: **memecoins**. From Dogwifhat to BONK to an endless parade of animal-themed tokens, memecoins became the most visible — and most controversial — onboarding mechanism for new crypto users. The case for them: they're fun, they bring attention, and they teach people how wallets, DEXs, and blockchain work. More people set up their first crypto wallet for a memecoin than for any "serious" protocol. The case against them: most go to zero, many are outright scams, and they make the entire industry look unserious. The political memecoins of early 2025 were particularly divisive. My take? Memecoins are crypto's version of penny stocks — they've always existed in financial markets, and they always will. The key is to make sure new users who arrive for the memes stick around for the actual technology. That's on us as a community. ## What's Coming Next Looking ahead to the rest of 2026 and beyond, here's what I'm watching: - **Chain abstraction** — The user shouldn't need to know (or care) which blockchain they're on. Projects like Particle Network and NEAR's chain signatures are making multi-chain interaction seamless. 
Imagine using apps that run on five different blockchains without ever noticing. - **AI agents on-chain** — Autonomous AI agents that can hold wallets, execute trades, and interact with DeFi protocols. We're in the very early innings here, but the intersection of AI and crypto is generating real experimentation. - **Real-world adoption acceleration** — Stablecoins for remittances in developing countries. Tokenized treasuries and real estate. Supply chain verification. The boring, world-changing stuff that doesn't make headlines but moves the needle. - **More ETFs** — Solana, XRP, and other altcoin ETFs are in the pipeline. Each approval widens the funnel. ## The Big Picture If crypto's first decade (2009-2019) was about proving the technology worked, and its second phase (2019-2024) was about building the infrastructure, then this current era is about **integration**. Crypto is weaving itself into the existing financial system — not replacing it, not fighting it, but merging with it. That's less romantic than the cypherpunk origins. But it's how transformative technologies actually win. The internet didn't replace phone companies — it absorbed them. Crypto is doing the same to finance. Whether you're just getting started or you've been here since the Bitcoin whitepaper, we're living through the transition from "crypto is interesting" to "crypto is normal." And that — more than any price target — is the real milestone. --- **What's Next:** You've seen where crypto is today. In [Part 20](/blog/crypto-unlocked-20-getting-started-safely), we'll get practical — how to actually get started safely, set up your first wallet, buy your first crypto, and avoid the most common mistakes that cost beginners money. No theory. Just a step-by-step guide to going from zero to your first transaction.
← [Previous: Real-World Applications](/blog/crypto-unlocked-18-real-world-applications) · [Series Index](/blog/series/crypto-unlocked) · [Next: Getting Started Safely](/blog/crypto-unlocked-20-getting-started-safely) →
--- --- # Crypto Unlocked Part 20: Getting Started Safely URL: /blog/crypto-unlocked-20-getting-started-safely Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Security, Getting Started, Wallet, Beginners Series: Crypto Unlocked (Part 20 of 21) --- Your step-by-step guide to buying your first crypto, setting up a wallet, making your first swap, and not getting scammed along the way. You've read nineteen parts of this series. You understand blockchains, DeFi, NFTs, DAOs, Layer 2s, and the regulatory landscape. You know more about crypto than 95% of the population. And yet — maybe you still haven't actually *done* anything with it. No judgment. The gap between understanding crypto and actually using it is wider than people think. There's real money involved, the interfaces can be intimidating, and the fear of screwing up keeps a lot of smart people on the sidelines. Today we fix that. This is your practical, step-by-step guide to going from zero to "I own crypto, I control it, and I know how to use it." We'll also cover the security practices that will keep you safe — because the biggest risk in crypto isn't volatility. It's you making a preventable mistake. Let's go. ![Your journey from exchange to self-custody wallet](/assets/blog/crypto-unlocked-20/wallet-setup-journey.png) ## Step 1: Buy Your First Crypto on a Centralized Exchange The easiest on-ramp is a centralized exchange (CEX). Think of it like a brokerage account, but for crypto. **For beginners, I recommend:** - **[Coinbase](https://www.coinbase.com)** — cleanest interface, beginner-friendly, available in most countries. Fees are slightly higher, but simplicity has value when you're starting out. - **[Kraken](https://www.kraken.com)** — solid alternative, lower fees, great reputation. Interface is a touch more complex. **What to do:** 1. Create an account on Coinbase or Kraken 2. Complete identity verification (KYC) — yes, you need to upload your ID. This is legally required. 3.
Link your bank account or debit card 4. Buy a small amount of ETH (Ethereum) — start with something you're comfortable losing. $50-$100 is fine. > **Why ETH first?** It's the most useful crypto to hold for actually *doing* things. You'll need it for gas fees (transaction costs) on Ethereum and many Layer 2 networks. Bitcoin is great as an investment, but ETH is the key to DeFi, NFTs, and the broader ecosystem. That's it. You now own crypto. But it's sitting on the exchange — which means *they* control it. Let's fix that. ## Step 2: Set Up a Self-Custody Wallet A self-custody wallet means you hold your own keys. No exchange can freeze your account, go bankrupt, or get hacked and lose your funds. Your crypto, your responsibility. **For Ethereum and EVM chains (Arbitrum, Optimism, Base, Polygon):** - **[MetaMask](https://metamask.io)** — the default. Browser extension + mobile app. Trusted by millions with over 5 billion transactions processed. Install it from [metamask.io](https://metamask.io) (and *only* from there — more on scams later). **For Solana (and beyond):** - **[Phantom](https://phantom.com)** — originally the go-to Solana wallet, Phantom now supports multiple chains including Ethereum, Bitcoin, Base, and Sui — all in one wallet. Clean, fast, easy. Get it from [phantom.com](https://phantom.com). **Setting up MetaMask:** 1. Install the browser extension (Chrome, Firefox, or Brave) 2. Click "Create a new wallet" 3. Set a strong password 4. **Write down your seed phrase** — this is 12 words that can recover your wallet. Write it on paper. Not in a notes app. Not in a screenshot. Paper. Store it somewhere safe. 5. Confirm the seed phrase 6. Done — you have a wallet address (starts with `0x`) > **Your seed phrase IS your wallet.** Anyone who has those 12 words has your money. Never share it. No legitimate service will ever ask for it. If someone asks for your seed phrase, they are trying to rob you. Full stop. 
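Why can 12 words on paper be an entire wallet? The arithmetic is worth seeing once. A back-of-envelope sketch in TypeScript, using the standard BIP-39 parameters (2048-word list, 12-word phrase); nothing here touches a real wallet:

```typescript
// Back-of-envelope: why a 12-word BIP-39 seed phrase is effectively unguessable.
// The standard wordlist has 2048 words, and 2048 = 2^11,
// so each word encodes 11 bits.
const WORDLIST_SIZE = 2048;
const bitsPerWord = Math.log2(WORDLIST_SIZE); // 11

// 12 words encode 132 bits total: 128 bits of entropy plus a 4-bit checksum.
const totalBits = bitsPerWord * 12; // 132

// Number of distinct 12-word phrases: 2^132, a 40-digit number.
const phrases = 2n ** BigInt(totalBits);

console.log(totalBits, phrases.toString().length); // 132, 40
```

Brute-forcing a 2^132 space is physically out of reach, which is exactly why those 12 words are the only thing standing between anyone and your funds: whoever holds them holds the wallet.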
## Step 3: Transfer From CEX to Your Wallet Now let's move your ETH from the exchange to your wallet. 1. Open MetaMask and copy your wallet address (click on your address at the top) 2. Go to your exchange → Withdraw → Select ETH 3. Paste your MetaMask address as the destination 4. **Start with a small test transaction** — send $5-$10 first. Wait for it to arrive. Then send the rest. > **The test transaction habit will one day save you.** Crypto transactions are irreversible. If you send to the wrong address, that money is gone. Always test first, especially with new addresses. The small fee you pay twice is insurance against losing everything. The transfer usually takes 1-5 minutes on Ethereum. You'll see it appear in MetaMask once confirmed. Congratulations — you now have crypto in a wallet you control. Nobody can freeze it. Nobody can take it without your keys. ## Step 4: Your First Swap on a DEX Let's use your crypto on a decentralized exchange (DEX). We'll swap some ETH for another token. **Using Uniswap:** 1. Go to [app.uniswap.org](https://app.uniswap.org) 2. Click "Connect Wallet" → select MetaMask → approve the connection 3. In the swap interface, the top token should be ETH 4. In the bottom field, search for a token — let's say USDC (a stablecoin pegged to $1) 5. Enter a small amount of ETH to swap 6. Click "Swap" → review the details → confirm in MetaMask 7. Wait for the transaction to confirm (usually under a minute) You just used DeFi. No intermediary. No account. No permission needed. Just you and a smart contract. > **Watch the gas fees.** Ethereum mainnet gas can be expensive during busy periods. If fees seem unreasonable ($20+ for a simple swap), consider using a Layer 2 like Arbitrum or Base — same experience, fraction of the cost. You can bridge ETH to L2s directly from most exchanges now. ## Step 5: Explore Your New World You're on-chain now.
Time to explore: - **[DeBank](https://debank.com)** — paste your wallet address to see your full portfolio across all chains. It's like a crypto dashboard for your entire on-chain life. - **[Etherscan](https://etherscan.io)** — paste your address to see every transaction you've made on Ethereum. This is the blockchain explorer — everything is public and transparent. Other chains have their own explorers (e.g., [Solscan](https://solscan.io) for Solana, [Arbiscan](https://arbiscan.io) for Arbitrum). - **[Zapper](https://zapper.xyz)** — another great portfolio tracker with a clean interface. Poke around. Look at your swap transaction on Etherscan. See the gas fee, the contract you interacted with, the exact amounts. This is the transparency we talked about in earlier parts of this series — you're seeing it firsthand now. ## The Security Checklist Now that you're up and running, let's lock things down. **Non-negotiable security basics:** - **Hardware wallet for serious money** — if you're holding more than you'd carry in your physical wallet, get a [Ledger](https://www.ledger.com) or [Trezor](https://trezor.io). These keep your private keys offline, making them nearly impossible to hack remotely. ~$70-$150. - **2FA on everything** — enable two-factor authentication on every exchange, every email, every account. Use an authenticator app (Google Authenticator, Authy), **not SMS** — SIM swap attacks are real. - **Separate email for crypto** — create a dedicated email address for your exchange accounts and crypto services. If your main email gets compromised, your crypto accounts are still isolated. - **Password manager** — unique, strong passwords for everything. [Bitwarden](https://bitwarden.com) is free and excellent. - **Bookmark your sites** — always access exchanges and DeFi apps from bookmarks, never from Google search results (scammers buy ads for fake sites). ## The Scam Landscape Here's the uncomfortable truth: crypto is full of scammers.
Not because crypto is bad — because it involves money, and money attracts predators. Here's what to watch for: ![How to spot phishing: legitimate vs fake crypto sites](/assets/blog/crypto-unlocked-20/phishing-warning.png) **Phishing sites** — fake versions of real websites. `app.uniswap.org` vs `app-uniswap.org` vs `uniswap-app.com`. One is real. The others will drain your wallet. Always check the URL. Always use bookmarks. **Fake airdrops** — random tokens appearing in your wallet that you didn't buy. Don't interact with them. Don't try to sell them. Some contain malicious smart contracts that drain your wallet when you approve the transaction. **Discord and Telegram DMs** — anyone DMing you about crypto is trying to scam you. "Support" staff, "moderators," people offering to help with your problem. Legitimate projects never DM you first. Turn off DMs in crypto Discord servers. **Twitter/X impersonators** — fake accounts mimicking real projects or influencers, posting "send 1 ETH, get 2 back" or linking to malicious sites. Elon Musk is not doubling your Bitcoin. Vitalik is not giving away ETH. **"Pig butchering" romance scams** — long-con scams where someone builds a relationship with you (often on dating apps or social media), then gradually convinces you to "invest" in a fake crypto platform. These are sophisticated and emotionally devastating. If someone you've never met in person is giving you crypto investment advice, it's a scam. The [FBI's IC3](https://www.ic3.gov) tracks these — they accounted for billions in losses in recent years. **Approval exploits** — when you use a DEX, you often approve it to spend your tokens. Malicious contracts can ask for unlimited approval, then drain your wallet later. Always check what you're approving. Note: even a hardware wallet won't protect you here — approvals don't require your private key to be stolen, they use permissions you already granted. 
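That last point is worth demystifying: an "approval" is just an ERC-20 `approve(spender, amount)` contract call, and revoking one is the same call with amount zero. A sketch of the calldata such a revocation transaction carries, in TypeScript. The 4-byte selector `0x095ea7b3` is the standard one for `approve(address,uint256)`; the spender address is made up for illustration, and this builds bytes only, it never signs or sends anything:

```typescript
// An ERC-20 approval is a contract call: approve(spender, amount).
// Revoking is the same call with amount = 0. Arguments are ABI-encoded
// as 32-byte (64 hex character) words after the 4-byte selector.
function buildRevokeCalldata(spender: string): string {
  const selector = "095ea7b3"; // first 4 bytes of keccak256("approve(address,uint256)")
  const spenderWord = spender.replace(/^0x/, "").toLowerCase().padStart(64, "0");
  const amountWord = "0".repeat(64); // amount = 0 zeroes out the allowance
  return "0x" + selector + spenderWord + amountWord;
}

const calldata = buildRevokeCalldata("0x1111111111111111111111111111111111111111");
console.log(calldata.length); // 138 chars: "0x" + 8 + 64 + 64
```

Tools like revoke.cash send a transaction carrying exactly this kind of payload to the token contract on your behalf; seeing the shape of it makes the "check what you're approving" advice concrete.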
## Revoke Approvals: Clean Up After Yourself Every time you approve a smart contract to spend your tokens, that permission stays active until you revoke it. Over time, you accumulate approvals — and any one of them could be exploited if that contract gets compromised. **Use [revoke.cash](https://revoke.cash):** 1. Connect your wallet 2. See all your active token approvals across over 100 supported networks 3. Revoke any you don't actively need Make this a monthly habit. It costs a small gas fee per revocation, but it's worth it. Think of it as changing your passwords regularly, but for smart contract permissions. > **Pro tip:** Install the [Revoke.cash browser extension](https://revoke.cash/extension) — it warns you *before* you sign a potentially harmful approval, acting as a real-time safety net against phishing sites. Prevention beats cleanup. Also worth noting: disconnecting your wallet from a dApp is **not** the same as revoking approvals. Disconnecting only removes the site's ability to see your address — the spending permissions remain active until explicitly revoked. ## Operational Security (OpSec) Beyond the technical stuff, how you *behave* matters: - **Don't tell people how much crypto you have.** Not on Twitter, not to friends, not to family members who "just want to know." You become a target the moment someone knows you hold significant crypto. The $5 wrench attack is real — why hack a wallet when you can threaten the owner? - **Use a VPN** when accessing crypto services on public WiFi. NordVPN, Mullvad, or ProtonVPN are all solid choices. - **Browser hygiene** — use a dedicated browser (or browser profile) for crypto. Fewer extensions means fewer attack vectors. Malicious browser extensions have drained wallets. - **Never sign transactions you don't understand.** If a website asks you to sign something and you're not sure what it does, close the tab. ## Tax Basics (Yes, Really) I know this isn't the fun part, but ignoring it won't make it go away. 
**In most countries, crypto gains are taxable.** The specifics vary, but the general principle is universal: if you buy crypto, it goes up in value, and you sell or swap it — you owe taxes on the gain. - **Every swap is a taxable event** in most jurisdictions. Swapping ETH for USDC? That's a sale of ETH. - **Keep records from day one.** Export your transaction history from exchanges. Use tools like [Koinly](https://koinly.io), [CoinTracker](https://www.cointracker.io), or [TokenTax](https://tokentax.co) to track everything automatically. - **Don't wait until tax season** to figure this out. Retroactively reconstructing a year of DeFi transactions across five chains is a nightmare I wouldn't wish on anyone. > **Tip:** Connect your wallets and exchange accounts to a crypto tax tool *now*, while you have a clean start. Future you will be grateful. ## Your Starting Budget Let's talk money. **Only invest what you can afford to lose completely.** This isn't a polite suggestion — it's the most important rule in crypto. The market can drop 80% and stay down for years. If that would ruin you financially, you're investing too much. **My recommendation for beginners:** - Start with an amount that's meaningful enough to care about, but small enough that losing it wouldn't change your life. For most people, that's somewhere between $100 and $500. - **Dollar-cost average (DCA) over lump sum.** Instead of putting $500 in at once, put $50 in every week for ten weeks. You'll get an average price instead of gambling on timing. DCA removes the emotional "should I buy now or wait?" anxiety. - Don't chase pumps. Don't buy something because it went up 50% yesterday. Don't FOMO. The market will still be here tomorrow. ## What's Next You've done it. You own crypto, you control it, you've used DeFi, and you know how to stay safe doing it. That's a massive step. But owning crypto is one thing — having a *strategy* is another.
In our final installment, **[Part 21: Building Your Strategy](/blog/crypto-unlocked-21-building-your-strategy)**, we'll put everything from this series together. We'll talk about portfolio construction, risk management, how to evaluate projects, when to take profits, and how to think about crypto as part of your broader financial picture. You've got the tools. Now let's build the plan.
← [Previous: The 2025-2026 Landscape](/blog/crypto-unlocked-19-2025-2026-landscape) · [Series Index](/blog/series/crypto-unlocked) · [Next: Building Your Strategy](/blog/crypto-unlocked-21-building-your-strategy) →
--- --- # Crypto Unlocked Part 21: Building Your Strategy URL: /blog/crypto-unlocked-21-building-your-strategy Published: 2026-01-27 Author: Jo Vinkenroye Tags: Crypto, Strategy, Portfolio, Research, Beginners Series: Crypto Unlocked (Part 21 of 21) --- The final chapter. DCA, portfolio construction, on-chain research, and how to stay informed without losing your mind. You're ready. You made it. Twenty chapters. From "what even is a blockchain?" all the way to DeFi protocols, NFTs, Layer 2s, DAOs, and security best practices. If you've followed along — even loosely — you now understand more about crypto than the vast majority of people on the planet. That's not hyperbole. Most people still think Bitcoin is a company. But knowledge without action is just trivia. This final chapter is about turning everything you've learned into something practical: **your strategy**. Not my strategy. Not some influencer's strategy. Yours. Let's build it. ## The Right Mindset Before we talk tactics, let's talk psychology — because crypto will test yours. This market is volatile. Not "tech stocks had a bad quarter" volatile. More like "your portfolio dropped 40% on a Tuesday because someone tweeted a meme" volatile. If you're not mentally prepared for that, no strategy will save you. Here's the mindset shift that separates survivors from casualties: - **Think in years, not days.** Zoom out. Bitcoin has "crashed" dozens of times and still trends up over any 4-year window in its history. - **Volatility is the price of admission.** You don't get 10x returns without stomach-churning drawdowns. That's the deal. - **Have a plan before you need one.** Decide now what you'll do when the market drops 50%. If the answer is "panic sell," you're overexposed. > 💡 Write your plan down. Literally. "If BTC drops below X, I will ___. If my portfolio is up 3x, I will ___." Future-you will thank present-you when emotions are running high. 
## Dollar Cost Averaging: The Boring Strategy That Wins You've heard of DCA by now. It's simple: invest a fixed amount at regular intervals, regardless of price. $100 every week. $500 every month. Whatever fits your budget. Why does this work? - **It removes emotion.** You're not trying to time the bottom or chase the top. You just buy. - **It smooths out volatility.** You buy more when prices are low, less when they're high — automatically. - **It's sustainable.** You can DCA for years without burning out or blowing up. Most major exchanges let you set up recurring buys. Do it once, forget about it, and let compounding do its thing. Tools like [dcaBTC](https://dcabtc.com) let you backtest DCA strategies on Bitcoin with real historical data — worth playing with before you commit to a schedule. > 💡 The best time to start DCA was yesterday. The second best time is today. Seriously — stop waiting for "the right moment." There is no right moment. There's only consistent action over time. Does DCA beat lump-sum investing? [Research from Vanguard](https://investor.vanguard.com/investor-resources-education/online-trading/dollar-cost-averaging-vs-lump-sum) shows that lump-sum investing outperforms DCA roughly two-thirds of the time in traditional markets — but DCA wins on **sleep quality**. And in a market as volatile as crypto, where drawdowns of 50-80% are routine, that risk-adjusted peace of mind matters more than you think. ![DCA vs lump sum investing comparison — DCA provides a smoother ride through crypto volatility](/assets/blog/crypto-unlocked-20-dca-vs-lump-sum.jpg) ## Building Your Portfolio There's no one-size-fits-all portfolio, but here's a framework that works for most people starting out: ### The Core-Satellite Approach [code block] - **Core (60-80%):** Bitcoin and Ethereum. These are your blue chips. They've survived multiple cycles, have the strongest network effects, and carry the least relative risk in crypto. 
We covered why in [Part 2](/blog/crypto-unlocked-02-bitcoin-digital-gold) and [Part 3](/blog/crypto-unlocked-03-wallets-keys-self-custody). - **Satellites (10-30%):** Altcoins you believe in after doing real research. Maybe a Layer 2 you use daily (remember [Part 13](/blog/crypto-unlocked-13-spot-dexs)?), a DeFi protocol with real revenue, or an infrastructure play. Use [Token Terminal](https://tokenterminal.com) to compare protocol revenues before allocating — treat it like evaluating a business. - **Stablecoin Reserve (5-15%):** Dry powder. Keep some USDC or DAI on the side so you can buy dips without selling existing positions. This also doubles as your "sleep at night" fund. ### Position Sizing: The 1-5% Rule No single altcoin should be more than 5% of your portfolio. Ideally, keep speculative bets at 1-2%. Why? Because altcoins can — and do — go to zero. If you put 30% of your portfolio into a single token and it collapses, you're not just down money. You're down *psychologically*, and that leads to worse decisions. > 💡 Never invest more than you can afford to lose completely. If losing this money would affect your rent, your groceries, or your relationships — you're overexposed. Scale back. Crypto will still be here when you're in a better position. ## Doing Your Own Research (For Real This Time) "DYOR" is the most repeated and least followed advice in crypto. Let's make it actionable. ### On-Chain Research Tools The beauty of crypto is that everything is transparent. You can verify claims with data instead of trust: - **[DefiLlama](https://defillama.com)** — Track TVL (Total Value Locked) across every DeFi protocol. If a project claims massive adoption but has $2M in TVL, that's a red flag. - **[Dune Analytics](https://dune.com)** — Community-built dashboards for any on-chain metric you can imagine. User growth, transaction volume, revenue — it's all there. - **[Token Terminal](https://tokenterminal.com)** — Financial metrics for crypto protocols. 
Revenue, P/E ratios, earnings — treating crypto projects like businesses. - **[Nansen](https://www.nansen.ai)** — Wallet analytics and smart money tracking. See what top wallets are doing, track whale movements, and identify trends early. Now also offers integrated trading on Solana and Base. - **[Arkham Intelligence](https://platform.arkhamintelligence.com)** — On-chain investigation tool. Links wallets to real-world entities through its searchable database. Great for understanding who's behind large movements and tracking specific addresses. ### Fundamental Analysis: What Makes a Good Project? When evaluating any crypto project, ask: - **Team:** Who's building this? Are they public, experienced, and accountable? Anonymous teams aren't automatically bad, but they're higher risk. - **TVL & Usage:** Are people actually using this protocol, or is it a ghost town with a shiny website? - **Revenue:** Does the protocol generate real fees? Where does the money come from? Protocols with sustainable revenue models survive bear markets. - **Tokenomics:** What's the supply schedule? Are there massive unlocks coming that will dump the price? Is the token actually needed for the protocol to function, or is it just a speculative vehicle? (We dug into this in [Part 6](/blog/crypto-unlocked-06-multi-chain-world).) - **Community:** Is there a real community of users and builders, or just a Telegram full of bots and price speculation? ### Reading Smart Contracts (The Basics) You don't need to be a Solidity developer, but you should know how to do basic verification: 1. Go to the contract address on [Etherscan](https://etherscan.io) 2. Check if the contract is **verified** — this means the source code is public and matches the deployed bytecode 3. Look for the "Read Contract" tab to see key parameters: owner, fees, permissions 4. 
Check if there's a proxy pattern (upgradeable contracts) — this means the team can change the contract logic later If a contract isn't verified, that's a yellow flag. If someone asks you to interact with an unverified contract, think twice. We covered wallet security in [Part 9](/blog/crypto-unlocked-09-defi-fundamentals) — those principles apply here too. ## Staying Informed Without Losing Your Mind Crypto moves fast. New protocols launch daily. Market narratives shift weekly. It's easy to feel like you're drowning in information. Here's how to stay current without becoming a full-time crypto analyst: ### The Signal Sources - **Crypto Twitter (CT):** Still the fastest source of crypto news. Follow builders, not influencers. Look for people who share analysis, not price predictions. - **Newsletters:** [Bankless](https://www.bankless.com) for DeFi and Ethereum ecosystem deep dives — their daily brief distills crypto news into 3 minutes. [The Defiant](https://thedefiant.io) for independent news coverage and analysis. Both are excellent signal-to-noise. - **Podcasts:** [Bankless podcast](https://www.bankless.com), [Unchained](https://unchainedcrypto.com), and [Bell Curve](https://www.youtube.com/@bellaboratory) for longer-form analysis when you're commuting or working out. - **Protocol-specific:** Follow the governance forums and Discord of projects you're invested in. That's where you'll learn about upcoming changes before they hit the news. ### Filtering the Noise The crypto information firehose will drown you if you let it. Some rules: - **Unfollow anyone who only posts price predictions.** They're guessing, just like everyone else. - **Ignore "guaranteed" returns.** Nothing is guaranteed. If it sounds too good to be true, re-read [Part 19](/blog/crypto-unlocked-19-2025-2026-landscape) on scams. - **Set time limits.** Check crypto news/Twitter once or twice a day, not every 15 minutes. Your portfolio doesn't need hourly babysitting. 
- **If you're anxious, you're overexposed.** This is a reliable signal. Reduce your position until you can sleep. > 💡 The best investors are often bored. If crypto is exciting every single day, you're probably overtrading. ## The Mistakes That Kill Portfolios I've watched smart people lose fortunes to these patterns. Don't be one of them: - **FOMO buying:** The token is up 200% this week, everyone's talking about it, and you throw money in at the top. By the time you hear about it on social media, the easy gains are gone. - **Panic selling:** The market crashes, you sell everything at the bottom, then watch it recover over the next six months. This is the mirror image of FOMO — same emotional driver, same result. - **Overtrading:** Every trade has fees. Every swap has slippage. Every transaction is a taxable event in most jurisdictions. The more you trade, the more you leak value. - **Ignoring fees and taxes:** Speaking of which — gas fees on L1, trading fees on exchanges, and tax obligations are all real costs. Factor them in. (This is another reason L2s from [Part 13](/blog/crypto-unlocked-13-spot-dexs) matter so much.) - **Going all-in on one token:** Diversification isn't just a TradFi concept. It's survival in crypto. - **Skipping security:** Using the same password everywhere, keeping everything on a centralized exchange, clicking random links. One mistake can undo years of gains. ## The Long View: Crypto Cycles Crypto moves in roughly 4-year cycles, loosely correlated with Bitcoin halvings (which we covered in [Part 2](/blog/crypto-unlocked-02-bitcoin-digital-gold)). Bitcoin's halvings occurred in 2012, 2016, 2020, and most recently April 2024 — each followed by significant bull runs within 12-18 months: [code block] 1. **Accumulation:** Prices are flat, sentiment is terrible, builders are building. 
On-chain metrics like active addresses and developer activity (trackable via [Dune dashboards](https://dune.com/browse/dashboards)) stay steady even as prices stagnate. 2. **Bull run:** New narratives emerge, prices explode, everyone's a genius. 3. **Euphoria/blow-off top:** Your taxi driver is asking you about altcoins. This is the top. TVL on [DefiLlama](https://defillama.com) spikes as leverage and speculation flood in. 4. **Bear market/crash:** Prices collapse 70-90%. Projects die. The cycle resets. **Bear markets are building seasons.** The best projects of every cycle were built during the previous bear market. Ethereum was conceived in 2013 and launched in 2015 during Bitcoin's bear market. DeFi Summer (2020) was built on protocols developed during the 2018-2019 bear. Solana's ecosystem exploded in 2023-2024 after being built through the post-FTX collapse winter. If you're paying attention during the quiet times, you'll be positioned for the loud ones. The investors who win long-term are the ones who survive the drawdowns and keep accumulating when everyone else has given up. ## What You've Learned: The 20-Chapter Journey Let's zoom out on what you now know: - **The fundamentals:** Blockchain, Bitcoin, Ethereum, how consensus works, why decentralization matters - **The ecosystem:** Altcoins, stablecoins, tokenomics, exchanges, wallets - **DeFi:** Lending, borrowing, DEXs, yield farming, liquidity pools - **NFTs and DAOs:** Digital ownership, community governance, new organizational models - **Scaling:** Layer 2s, rollups, the modular blockchain thesis - **Security:** Wallet hygiene, scam recognition, self-custody best practices - **The big picture:** Regulation, real-world adoption, where this is all heading - **Strategy:** Portfolio construction, risk management, research methodology That's not a surface-level overview. That's a foundation you can build on for years. ## Where to Go From Here This series gave you the map. 
Now you walk the terrain: - **Start small.** Set up a DCA. Buy some ETH. Use a DeFi protocol with real money (a small amount). Experience is the best teacher. - **Go deeper on what interests you.** Loved the DeFi chapters? Dive into yield strategies. Fascinated by L2s? Start using them daily. Intrigued by DAOs? Join one. - **Build.** If you're technical, start learning Solidity or contributing to open-source protocols. If you're not, contribute to communities, write about what you've learned, help onboard others. - **Stay patient.** The biggest gains in crypto go to those who can sit still for years. Not the ones refreshing [CoinGecko](https://www.coingecko.com) every five minutes. - **Bookmark your toolkit.** Keep [DefiLlama](https://defillama.com), [Dune](https://dune.com), [Token Terminal](https://tokenterminal.com), and [Nansen](https://www.nansen.ai) in your browser. These are your research command center. ## Graduation Day After completing this series, **you now know more than 95% of people talking about crypto on the internet.** Most of the noise comes from people who bought a token, don't understand what it does, and are hoping it goes up. You're different. You understand the technology. You know how to evaluate projects. You know how to protect yourself. You have a framework for building a portfolio that can survive the chaos. That doesn't mean you won't make mistakes — you will. Everyone does. But you'll make *informed* mistakes, learn from them faster, and recover more gracefully. Crypto is still early. The infrastructure is still being built. The regulations are still being written. The killer apps are still being invented. You're not late — you're just in time to understand what's actually happening instead of chasing hype. So go build your strategy. Start your DCA. Set up your wallet properly. Bookmark DefiLlama. Unfollow the charlatans. And most importantly — enjoy the ride. It's been a pleasure writing this series. Now get out there. 
Welcome to crypto. You're ready. 🎓
← [Previous: Getting Started Safely](/blog/crypto-unlocked-20-getting-started-safely) · [Series Index](/blog/series/crypto-unlocked)
--- --- # Hardening Your Clawdbot Server: A Complete Security Guide URL: /blog/server-security-fail2ban-ufw Published: 2026-01-27 Author: Jo V Tags: Clawdbot, Security, Linux, DevOps, Server, SSH, endlessh --- within 60 seconds of setting up fail2ban, it caught an active brute-force attack. here's how to lock down your clawdbot server properly. I set up fail2ban on my Clawdbot server tonight. Within 60 seconds, it banned its first attacker. Not a test. An actual bot was actively trying to brute-force SSH while I was configuring the firewall. [code block] Every 30 seconds. Different usernames. Automated. Relentless. Then fail2ban kicked in: [code block] Done. 30-day ban. If you're running Clawdbot on a VPS, you need this. ## Why Clawdbot Servers Are Targets Clawdbot servers are juicy targets because they typically have: - **Shell access** — The agent can run commands - **API credentials** — Anthropic, OpenAI, messaging platforms - **Personal data** — Session logs, memory files, contacts - **Always-on connectivity** — 24/7 uptime on cloud VPS An attacker who compromises your Clawdbot server gets access to your AI assistant's full capabilities. They could read your conversations, impersonate you on messaging platforms, or rack up API bills. Let's fix that. ## Step 1: SSH Key Authentication (Critical) This is the single most important security step. Password authentication is the weakest link. SSH keys are cryptographically secure and impossible to brute-force. **On your local machine**, generate a key pair: [code block] This creates two files: - `~/.ssh/id_ed25519` — Your private key (never share this) - `~/.ssh/id_ed25519.pub` — Your public key (goes on the server) **Copy your public key to the server:** [code block] Or manually: [code block] **Test it works:** [code block] If you get in without a password prompt, it's working. 
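A sketch of Step 1's key generation (paths and file names match the defaults described above; the email comment and `-N ""` are illustrative choices that make the command non-interactive; prefer a passphrase or an SSH agent in practice):

```shell
# Create ~/.ssh with owner-only permissions if it doesn't exist yet
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"

# Generate an Ed25519 key pair: private key in id_ed25519, public key
# in id_ed25519.pub. Skipped if a key already exists (ssh-keygen would
# otherwise prompt before overwriting).
[ -f "$HOME/.ssh/id_ed25519" ] || \
  ssh-keygen -t ed25519 -C "you@example.com" -f "$HOME/.ssh/id_ed25519" -N ""
```

From there, `ssh-copy-id user@your-server` pushes the `.pub` half into the server's `~/.ssh/authorized_keys`.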
## Step 2: Disable Password Authentication Once SSH keys work, disable password authentication entirely: [code block] Find and change these lines: [code block] Restart SSH: [code block] Now the only way in is with your private key. Brute-force attacks become completely pointless — they're trying to guess a password that doesn't exist. ## Step 3: Install fail2ban Even with SSH keys, we want fail2ban as a backup layer to keep logs clean and avoid wasting resources. [code block] Create a local config: [code block] What this does: - **maxretry = 3** — Three failed attempts = banned - **bantime = 2592000** — 30-day ban (if you use SSH keys, any failed attempt is definitely an attacker) - **findtime = 600** — Attempts counted within 10-minute window Start it: [code block] Check if it's working: [code block] ## Step 4: Configure ufw Firewall [code block] Verify: [code block] ## Step 5: SSH Tarpit with endlessh (The Fun Part) Here's where it gets interesting. The idea: move real SSH to a non-standard port, put endlessh on port 22. Bots connect to port 22 and get trapped in an infinitely slow banner that sends one random byte every 10 seconds. They never get a login prompt. Meanwhile, real SSH runs unbothered on a different port. ### What endlessh Does endlessh is an SSH tarpit. When a bot connects, it sends an SSH banner **infinitely slowly** — one random byte every 10 seconds. The SSH spec allows banners up to 255 characters, but it doesn't specify a minimum speed. Most bots will wait patiently for the full banner, consuming a connection slot and wasting their time. It's like digital quicksand for SSH bots.
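For reference, the Step 3 settings collected into `/etc/fail2ban/jail.local` would look roughly like this (a sketch, not the exact file from the elided block above; `[sshd]` is fail2ban's stock jail for SSH, and the values are the ones explained in Step 3):

```ini
[DEFAULT]
# 30-day ban, in seconds
bantime = 2592000
# count failures within a 10-minute window
findtime = 600
# three failures inside findtime = banned
maxretry = 3

[sshd]
enabled = true
```

After editing, restart the service (`sudo systemctl restart fail2ban`) and confirm the jail is live with `sudo fail2ban-client status sshd`.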
### Install endlessh [code block] ### Move Real SSH to Port 2222 On modern Ubuntu, SSH is managed by systemd sockets, so we need to override the socket config: [code block] Reload and restart: [code block] Verify SSH is now on 2222: [code block] ### Configure endlessh for Port 22 [code block] This configures: - **Port 22** — The standard SSH port (bots will find this) - **Delay 10000** — Send one byte every 10 seconds - **MaxClients 4096** — Handle lots of trapped bots - **LogLevel 1** — Log connections for entertainment ### Fix the systemd Service endlessh might fail with a NAMESPACE error on modern systems. Create a service override: [code block] ### Start endlessh [code block] Check it's working: [code block] ### Update ufw for New SSH Port [code block] ### Test the Tarpit From another machine, try connecting to port 22: [code block] It should just hang there, sending one character every 10 seconds. After a while you'll see a super slow, garbled banner. Press Ctrl+C to escape. Now try the real SSH: [code block] Instant connection with your SSH key. ### The Results Bots waste their time on port 22 while your real SSH runs unbothered on 2222. Your logs stay clean because failed attempts hit the tarpit, not your real SSH service. Watching `journalctl -f -u endlessh` is surprisingly entertaining — bots connecting and just... waiting. ### How Effective Is It Really? [One researcher](https://github.com/bediger4000/ssh-tarpit-behavior) ran endlessh for months and collected data on trapped bots. 
The results are hilarious: - One bot held **416 concurrent connections** open at the same time — just kept opening new ones without closing the old - A single IP stayed connected for **690,172 seconds** — that's **8 days straight** — downloading 1.2MB of random garbage thinking it was an SSH banner - Some connections lasted **12,000+ seconds** (3.3 hours) before the bot gave up - The median trap time was 17 seconds (the "smart" bots with timeouts), but the mean was 119 seconds — dragged up by the dumb ones that wait forever The quality of underground scanning software varies immensely. Some bots have a 15-second timeout and move on. Others have **no timeout at all** — they'll sit there until the heat death of the universe waiting for a login prompt that never comes. And the best part: endlessh uses virtually zero resources. It's a single process handling thousands of connections with minimal CPU and memory. The bots are the ones burning resources, not you. ### Monitor the Tarpit Watch bots get trapped in real time: [code block] Count current trapped connections: [code block] ## Step 6: Change SSH Port (Brief) We already moved SSH to 2222 for the tarpit, but here's the general approach if you just want to change ports without endlessh: [code block] Update firewall: [code block] Security through obscurity isn't real security, but it does reduce noise from random scans. Combined with SSH keys and fail2ban, it's effective. ## Step 7: Secrets Management with pass Don't store API keys and passwords in plaintext config files. Use [pass](https://www.passwordstore.org/) — the standard Unix password manager. 
**Install it:** [code block] **Set it up:** [code block] **Store secrets:** [code block] **Retrieve secrets in scripts:** [code block] **Why this matters:** - Secrets are GPG-encrypted at rest - Even if someone accesses your filesystem, they can't read the passwords without your GPG key - You can sync your password store via git (safely, since everything is encrypted) - Works great with Clawdbot — store your API keys and channel tokens securely Your `~/.password-store/` directory is encrypted. Your `~/.clawdbot/clawdbot.json` with plaintext tokens? Not so much. ## Step 8: Clawdbot Security Audit Clawdbot has a built-in security scanner. Run it: [code block] For a deeper check: [code block] To auto-fix common issues: [code block] ### What the Audit Checks - **Inbound access** — Can strangers message your bot? - **Tool blast radius** — Could prompt injection lead to shell access? - **Network exposure** — Is your Gateway exposed without auth? - **Browser control** — Is remote browser control secured? - **Disk permissions** — Are credentials and logs protected? - **Plugins** — Are untrusted extensions loaded? ### Fix Workspace Permissions The audit will likely warn about permissions. Fix them: [code block] This ensures only your user can read Clawdbot's config and credentials. ### Credential Storage Locations Know where your secrets live: - **Telegram token:** config or `channels.telegram.tokenFile` - **WhatsApp auth:** `~/.clawdbot/credentials/whatsapp/*/creds.json` - **Pairing allowlists:** `~/.clawdbot/credentials/*-allowFrom.json` - **Session logs:** `~/.clawdbot/agents/*/sessions/*.jsonl` All of these should be readable only by your user (not group/world). ## Step 9: Channel Security ### Lock Down DMs By default, Clawdbot might accept messages from anyone. Tighten it: In your config, set DM policies to `allowlist`: [code block] ### Lock Down Groups Same for groups — use allowlists instead of open: [code block] The security audit will flag open policies. 
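The permissions fix from the audit section can be sketched like this (assuming the default `~/.clawdbot` location from the credential list above; `mkdir -p` is a no-op on an existing install):

```shell
# Strip group/other access from everything Clawdbot stores: config,
# credentials, allowlists, and session logs become owner-only
mkdir -p "$HOME/.clawdbot"
chmod -R go-rwx "$HOME/.clawdbot"
```

Re-run the security audit afterwards; the permissions warning should clear.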
## Step 10: Report Attackers to AbuseIPDB Blocking attackers locally is good. Getting them blacklisted globally is better. [AbuseIPDB](https://www.abuseipdb.com) is a community database of abusive IPs. When you report an attacker, every other server using their blocklist benefits. It's collective defense. ### Get a Free API Key Sign up at [abuseipdb.com](https://www.abuseipdb.com/account/api) — the free tier allows 1000 reports/day. Store it securely: [code block] ### Report a Banned IP [code block] Categories `18,22` = brute-force + SSH. ### Auto-Report with Clawdbot I built a [Clawdbot skill](https://github.com/jestersimpps/clawdbot-fail2ban-reporter) that auto-reports every new ban to AbuseIPDB. If you're running Clawdbot, install it: [code block] Or grab it from GitHub: [code block] After setup, every fail2ban ban automatically reports to AbuseIPDB. Zero effort, maximum community impact. ## Step 11: Monitor Ongoing Attacks Check who's been banned: [code block] Watch live bans: [code block] Check recent SSH attempts: [code block] Watch endlessh trap bots in real-time: [code block] You'll see connections that just hang there. Each one is a bot wasting time instead of bothering your real SSH. ## What Attackers Actually Try From my server logs in just one hour: - **oracle** — Oracle DB default user - **postgres** — PostgreSQL default - **git** — GitLab/Gitea servers - **solana** — Crypto node operators (popular target) - **HwHiAiUser** — Huawei device default - **admin, root** — The classics - **ftpuser** — Legacy FTP They spray common usernames hoping something sticks. fail2ban stops them after 3 attempts, but now they hit the tarpit first. 
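To make Step 10's manual report concrete, here's a TypeScript sketch. The payload builder is the runnable part; the endpoint and `Key` header in the comment follow AbuseIPDB's v2 API docs, and the IP and comment are placeholders:

```typescript
// Build the form body for an AbuseIPDB v2 report.
// Categories 18 (brute-force) and 22 (SSH), as used above.
function buildReport(ip: string, categories: number[], comment: string): URLSearchParams {
  return new URLSearchParams({
    ip,
    categories: categories.join(","),
    comment,
  });
}

const body = buildReport("203.0.113.7", [18, 22], "SSH brute-force (fail2ban)");
console.log(body.toString());
// ip=203.0.113.7&categories=18%2C22&comment=SSH+brute-force+%28fail2ban%29

// Sending it would look like (requires your API key, e.g. pulled from pass):
// await fetch("https://api.abuseipdb.com/api/v2/report", {
//   method: "POST",
//   headers: { Key: apiKey, Accept: "application/json" },
//   body,
// });
```

The same payload shape is what the curl command in Step 10 submits; building it in code is how the auto-reporter skill does it on every new ban.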
## The Complete 5-Minute Setup Copy-paste this entire block to harden a fresh Clawdbot server: [code block] **Important:** After running this, connect via SSH on the new port: [code block] ## The Complete Security Stack After following this guide, your Clawdbot server has: - **SSH key authentication** — No passwords to brute-force - **Password auth disabled** — Even if they guess right, it won't work - **endlessh tarpit** — Bots waste time on port 22 while real SSH hides on 2222 - **fail2ban** — 3 attempts → 30-day ban (backup layer) - **ufw firewall** — Only necessary ports exposed - **pass** — Secrets GPG-encrypted at rest - **Clawdbot pairing** — Strangers can't message your bot - **Proper permissions** — Config and credentials locked down - **AbuseIPDB reporting** — Attackers get blacklisted globally That's defense in depth. Multiple layers, each one making the next attack harder. The tarpit is the cherry on top — instead of just blocking attackers, you're wasting their time and resources while keeping your real services hidden. It's not paranoia when every server with a public IP gets attacked within hours. Five minutes of setup. Sleep better at night. --- *For more Clawdbot security details, check the [official security docs](https://docs.clawd.bot/gateway/security).* --- --- # Is Quantum Computing a Real Threat to Bitcoin? Here's What Actually Matters URL: /blog/quantum-bitcoin-threat-reality-or-fud Published: 2026-01-26 Author: Jo V Tags: Bitcoin, Quantum Computing, Cryptography, Security, Crypto --- a wall street strategist just dumped bitcoin over quantum fears. let's break down what's actually at risk—and what isn't. Christopher Wood just dropped his entire Bitcoin allocation. The widely-followed Wall Street strategist at Jefferies—whose "Greed and Fear" newsletter moves markets—pulled Bitcoin from his model portfolio this week. His reason? Quantum computing. He's replacing BTC with gold. The kind you can hold. 
The kind that doesn't care about Shor's algorithm. This comes after Coinbase's head of research suggested **33% of Bitcoin's supply** could be vulnerable to quantum attacks. Bankless went further, saying quantum could "divide Bitcoin by zero." So is this the end? Or is it FUD? Let's break it down. ## What Quantum Actually Threatens First, some technical clarity. Bitcoin relies on two cryptographic primitives: 1. **ECDSA (Elliptic Curve Digital Signature Algorithm)** — Used to sign transactions. This proves you own the private key without revealing it. 2. **SHA-256** — Used for mining (proof of work) and creating addresses from public keys. Quantum computers threaten these differently. ### ECDSA: The Real Risk Shor's algorithm, running on a sufficiently powerful quantum computer, could theoretically derive a private key from a public key in polynomial time. Today's classical computers would take longer than the age of the universe for the same task. The key word is "theoretically." We'll get to why. Here's the attack scenario: [code block] This is called the "[transaction interception attack](https://bitcoinmagazine.com/technical/quantum-computing-and-bitcoin-security)." It's real in theory, but requires: - A quantum computer with **millions** of stable qubits - Error correction that doesn't exist yet - Sub-10-minute execution time ### SHA-256: Not Really at Risk Grover's algorithm could theoretically speed up brute-force searches, but only quadratically. For SHA-256, this means reducing security from 256 bits to 128 bits. 128-bit security is still considered unbreakable by any foreseeable technology. Your addresses are safe from quantum hash attacks. ## The "33% Vulnerable" Claim Coinbase's research noted that roughly 33% of Bitcoin's supply sits in addresses where the public key has been exposed. 
This includes:

- **P2PK addresses** (original Satoshi-era format, public key is the address)
- **Addresses that have been spent from** (spending reveals the public key)
- **Lost coins** (many early addresses used P2PK)

Satoshi's estimated 1 million BTC? Sitting in P2PK addresses with exposed public keys.

**Here's what's NOT vulnerable:**

- Modern P2PKH and P2SH addresses (public key hidden behind hash)
- Addresses you've never spent from
- Any address using newer formats

If you're using a modern wallet and following best practices (fresh address per transaction), your Bitcoin isn't at risk until you spend it.

## The Timeline Problem

Here's where the FUD falls apart: **we're not close.**

[Google's Willow chip](https://blog.google/technology/research/google-willow-quantum-chip/), announced in late 2024, has 105 qubits. It made headlines for solving a specific benchmark problem in 5 minutes that would take classical computers 10 septillion years.

Impressive? Sure. Relevant to Bitcoin? Not really.

Breaking ECDSA-256 would require an estimated **1,500 to 4,000 logical qubits**. But logical qubits require error correction, which means you need millions of physical qubits to produce thousands of logical ones.

Current estimates:

- **Physical qubits:** ~1,000 today → need millions for BTC attack
- **Logical qubits:** ~0 today → need 1,500-4,000 for BTC attack
- **Error rates:** high today → need near-zero for BTC attack
- **Estimated timeline:** 2030-2050+ for cryptographically-relevant QC

Even quantum computing optimists don't expect cryptographically-relevant quantum computers before 2030. Most serious researchers say 2040-2050.

## What Bitcoin Can Do About It

Bitcoin has time. And options.

### 1. Post-Quantum Cryptography

[NIST finalized its first post-quantum cryptographic standards](https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards) in 2024.
These include:

- **CRYSTALS-Dilithium** — Digital signatures resistant to Shor's algorithm
- **SPHINCS+** — Hash-based signatures (even more conservative)
- **CRYSTALS-Kyber** — Key encapsulation for future use

Bitcoin could soft-fork to support these new signature schemes. The community has been discussing BIP proposals for quantum resistance since 2016.

### 2. Address Format Migration

A coordinated migration to quantum-resistant addresses could protect the network. This has precedent—Bitcoin has upgraded address formats multiple times (P2PKH → P2SH → P2WPKH → P2TR).

### 3. The Nuclear Option

In an emergency, a hard fork could:

- Freeze all P2PK addresses (controversial—includes Satoshi's coins)
- Require migration to new address formats within a deadline
- Implement quantum-resistant signatures immediately

Nobody wants this option. But it exists.

## The Incentive Argument

Here's something the doomers miss: **quantum computer operators have stronger incentives to mine Bitcoin than to attack it.**

A quantum computer capable of breaking ECDSA could also find SHA-256 hashes faster (via Grover's algorithm). The first entity with such a computer could:

1. **Attack Bitcoin** — Steal some coins, destroy confidence in the network, crash the price of everything you stole
2. **Mine Bitcoin** — Earn consistent, legitimate block rewards while the network functions normally

Option 2 is obviously more profitable. You'd be killing the golden goose for a one-time meal.

## My Take

Christopher Wood is a smart guy. He's also managing other people's money and needs to account for tail risks. For institutional allocators with fiduciary duties, "quantum might break Bitcoin someday" is a reasonable concern to flag. It's conservative risk management.

For individual holders? I think it's overblown. The timeline is long. The solutions exist. The incentives work in Bitcoin's favor.
And the 33% "vulnerable" supply includes coins that are probably lost forever anyway (including Satoshi's stash; those coins moving would be a bigger story than quantum computing).

**What I'm watching:**

- Progress on quantum error correction (the real bottleneck)
- Bitcoin BIPs proposing quantum-resistant upgrades
- NIST post-quantum standard adoption in other protocols

If you see a 10,000-qubit quantum computer with stable error correction, then start worrying.

Until then, the bigger risk to your Bitcoin is forgetting your seed phrase.

---

## Sources & Further Reading

- [Google's Willow Quantum Chip Announcement](https://blog.google/technology/research/google-willow-quantum-chip/) — The 105-qubit chip that sparked recent headlines
- [NIST Post-Quantum Cryptography Standards](https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards) — The official quantum-resistant encryption standards
- [Bitcoin Magazine: Quantum Computing and Bitcoin Security](https://bitcoinmagazine.com/technical/quantum-computing-and-bitcoin-security) — Technical deep-dive on attack vectors
- [RAND: When Quantum Computers Break Encryption](https://www.rand.org/pubs/commentary/2023/09/when-a-quantum-computer-is-able-to-break-our-encryption.html) — Timeline estimates from security researchers

---

*This isn't financial or cryptographic advice. I'm a developer, not a quantum physicist. Do your own research—but maybe don't sell your Bitcoin because of a threat that's decades away.*

---

---

# Silver Breaks $100: The Trade of a Generation

URL: /blog/silver-breaks-100-the-trade-of-a-generation
Published: 2026-01-26
Author: Jo V
Tags: Silver, Gold, Precious Metals, Investing, Macro

---

Silver just hit $107. One man bet $1 billion on this moment. Here's why the precious metals rally might just be getting started.

Silver just broke $107 per ounce. If you're not paying attention to precious metals right now, you should be.

This isn't a blip.
This is a 3x move from where silver sat just 18 months ago.

## The Numbers

Let's ground this:

- **Silver:** $107.52 (+4.11% today alone)
- **Gold:** Approaching $5,000/oz
- **Silver's 2024 low:** ~$22/oz
- **Gain from low:** ~390%

The RSI is screaming overbought. The chart has gone parabolic. Every technical indicator says "pullback imminent."

And yet.

## The Man Who Bet $1 Billion

In early 2025, [David Bateman](https://x.com/davidbateman)—founder of Entrata, the property management software company—revealed he had purchased "close to a billion dollars in precious metals over the past six months."

To be exact: **12.69 million ounces of physical silver**. That's 1.5% of the *entire annual global supply*.

When silver broke $100 this week, [he posted](https://x.com/davidbateman/status/2014752284379644294):

> "Congrats everyone on $100 silver. Couldn't have happened to a better group of degenerate mildly autistic misfits."

His cost basis wasn't disclosed, but with silver in the low $30s when he was accumulating, that's easily a **250%+ return** on a billion-dollar position.

## The Thesis

Why would someone bet a billion dollars on shiny metal? Bateman laid out his reasoning publicly:

1. **The global monetary system is collapsing.** What some call "The Great Reset" or Basel Endgame.
2. **The biggest credit bubble in history is popping.** $300 trillion in global debt.
3. **US debt refinancing is impossible without massive printing.** $28 trillion in treasuries maturing in the next 4 years.
4. **Trump tariffs are accelerating the timeline.** By design, he argues.
5. **Physical possession is everything.** No counterparty risk.

His most memorable line: *"The whole world right now is a sophisticated game of musical chairs; the chairs are precious metals."*

## The Central Bank Bid

This isn't just retail speculation.
According to Goldman Sachs' Rick Privorotsky, the dominant driver is structural:

> "There is clearly hot money involved, but first and foremost gold is a central bank trade… a slow erosion of the dollar's exorbitant privilege rather than a sudden loss of confidence."

Central banks—particularly China, Russia, and emerging markets—have been accumulating gold at unprecedented rates. They're diversifying away from dollar-denominated assets. Silver, being both a monetary and industrial metal, rides the same wave.

## The Buffett Precedent

Warren Buffett knows something about silver.

In the late 1990s, Berkshire Hathaway accumulated **129.7 million ounces**—about 4,000 metric tons—of physical silver. They held through the Dot Com crash and sold around 2006 for a substantial profit.

Buffett's thesis then was supply/demand imbalance. Silver was being consumed industrially faster than it was being mined.

The same thesis applies today, amplified by:

- **Solar panel demand** (silver is critical for photovoltaic cells)
- **EV battery technology**
- **5G infrastructure**
- **AI data center buildout**

Industrial demand is structurally higher than it's ever been. Mine supply isn't keeping up.

## The Bear Case

Let's be honest about the risks:

**Technicals are stretched.** RSI at nosebleed levels. Parabolic charts historically mean painful corrections.

**Tariff policy is uncertain.** The recent news that the US held off on critical mineral tariffs caused a pullback. Policy shifts can move metals violently.

**Dollar strength.** If the Fed stays hawkish longer than expected, dollar strength could pressure metals.

**Profit-taking.** After a 3x move, some holders will want to lock in gains. The question is whether new buyers absorb the selling.

## What I'm Watching

I'm not a financial advisor. I'm a developer who pays attention to macro trends. Here's what I'm watching:

1. **Central bank buying data** — Monthly reports from World Gold Council
2. **COMEX inventories** — Physical metal availability
3. **Gold-to-silver ratio** — Currently around 47:1, historically averages 60:1. If silver is "catching up," there's room to run.
4. **Treasury auction results** — Any signs of failed auctions or weak demand
5. **Industrial demand data** — Solar installations, EV sales, data center construction

## The Bigger Picture

Whether or not you buy the "collapse" thesis, something is clearly shifting.

Central banks are voting with their vaults. They're accumulating hard assets at the expense of paper claims. That's not conspiracy theory—it's observable behavior.

Silver at $107 feels surreal if you remember it at $15 in 2020. But gold at $5,000 would have seemed equally absurd then.

The game of musical chairs continues. The question is whether you want a seat.

---

*This is not financial advice. I'm long some silver, but I'm also a software developer who spends most of my time arguing with AI about code formatting. Do your own research.*

---

---

# The End of Coding as We Know It

URL: /blog/the-end-of-coding-as-we-know-it
Published: 2026-01-26
Author: Jo V
Tags: AI, Development, Future of Work, Anthropic, Claude, AGI, Recursive Self-Improvement

---

Anthropic's CEO predicts AI will write virtually all code. With recursive self-improvement, AI is now optimizing itself. As a developer with 13+ years of experience, here's what that means.

At Davos 2026, Anthropic CEO Dario Amodei made a statement that sent ripples through the tech world: AI will soon be writing virtually all software.

Not assisting. Not augmenting. *Writing*.

For someone who's spent 13+ years crafting code, this should feel like an existential threat. Strangely, it doesn't.

## The Prediction

Amodei's vision isn't hyperbole—it's a trajectory.
In his essay [*Machines of Loving Grace*](https://www.darioamodei.com/essay/machines-of-loving-grace), he describes what he calls "a country of geniuses in a datacenter": AI systems smarter than Nobel Prize winners across every relevant field, running in millions of parallel instances, capable of "writing difficult codebases from scratch."

The timeline? He suggests powerful AI could arrive "as early as 2026"—which is now.

## We're Already There (Sort Of)

The shift isn't coming. It's here.

I'm writing this post while Claude, an AI, manages my development workflow. Not as a fancy autocomplete—as an actual collaborator that:

- Understands context across entire codebases
- Proposes architectural decisions
- Writes, tests, and debugs code
- Learns my preferences and coding style

[Claude Code](https://www.anthropic.com/) isn't the only player. GitHub Copilot changed how millions write code. [Cursor](https://cursor.sh/) reimagined the IDE around AI-first workflows. Replit, Codeium, and dozens of others are racing to make traditional coding feel... manual.

## The Numbers Don't Lie

According to [The Verge's](https://www.theverge.com/ai-artificial-intelligence) recent coverage, a survey of 5,000 white-collar workers shows dramatically different experiences with AI productivity:

- **40%** of workers say AI saves them *no time* each week
- **2%** of workers say it saves them *12+ hours* weekly
- But **19%** of *executives* report 12+ hours saved

The gap is telling. Those who've learned to work *with* AI—treating it as a collaborator rather than a tool—are operating in a different reality.

## What "100% AI Development" Actually Means

Let's be precise about what Amodei is predicting. It's not that humans will be banned from coding. It's that the *optimal* way to build software will involve AI doing the heavy lifting while humans do something different:

1. **Defining intent** — What should this system do? For whom? Why?
2. **Architectural judgment** — Which tradeoffs matter for this use case?
3. **Quality assessment** — Does this actually solve the problem?
4. **Domain expertise** — Understanding the business, users, and context

This isn't new. We already went through this transition—from assembly to high-level languages, from manual memory management to garbage collection, from bare metal to cloud infrastructure. Each time, we traded low-level control for higher-level leverage.

AI is the next abstraction layer.

## The Skills That Matter Now

If AI handles implementation, what's left for developers?

**Systems thinking.** Understanding how components interact, where bottlenecks emerge, what fails at scale. AI can generate code; it can't (yet) intuit that your architecture will collapse under load because of a subtle race condition in a service it's never seen.

**Product sense.** The best code solves the right problem. That requires understanding users, business models, and market dynamics—areas where human judgment still dominates.

**Communication.** Describing what you want to an AI is a skill. The developers getting 12+ hours of productivity gains have learned to prompt precisely, provide context effectively, and iterate collaboratively.

**Taste.** Knowing when code is elegant vs. merely functional. Recognizing technical debt before it compounds. Sensing when a solution is overengineered. These aesthetic judgments remain distinctly human.

## The Uncomfortable Truth

Here's what the discourse often misses: most code was never that good anyway.

The average enterprise codebase is a monument to compromise—tight deadlines, unclear requirements, rotating teams, legacy constraints. AI won't replace brilliant 10x engineers writing beautiful systems. It will replace the 80% of development work that was always more about volume than virtuosity.

And honestly? Good riddance. I didn't become a developer because I love typing semicolons. I became one because I love building things that matter.
If AI handles the typing while I focus on the mattering, that's not a loss—it's a promotion.

## What I'm Actually Doing About It

I'm not learning to "prompt engineer" as if it's a separate skill. I'm integrating AI into everything I already do:

- **Architecture sessions** now include AI as a participant, not just a documentation tool
- **Code reviews** use AI for first-pass analysis so human review focuses on design decisions
- **Learning new technologies** happens through dialogue, not documentation spelunking
- **Debugging** starts with AI hypotheses before I form my own

The goal isn't to become dependent on AI. It's to become *fluent* in human-AI collaboration—so fluent that the boundary dissolves.

## The Next Five Years

If Amodei is right, here's what I expect:

**2026-2027:** AI coding tools become standard. Resistance becomes a career liability. Junior developer roles transform dramatically—entry-level work is now AI work.

**2027-2028:** The first major systems built primarily by AI ship to production. They'll have bugs, like all software, but they'll work. The myth that AI can't handle "real" development will die.

**2028-2030:** Development velocity increases 10-100x for teams that adapt. The gap between AI-native and AI-resistant organizations becomes insurmountable.

## The Elephant in the Room: Recursive Self-Improvement

There's a concept that makes this entire trajectory feel different from previous technological shifts: *recursive self-improvement* (RSI).

The idea is simple, and terrifying: an AI system that can improve its own code can, in theory, improve the code that improves its code. Each iteration makes the next iteration faster and better. The result isn't linear progress—it's exponential.

This isn't science fiction anymore.
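To make the linear-versus-exponential point concrete, here's a toy model (purely illustrative; the growth rates and function names are invented for this post, not measurements of any real AI system). It compares progress where each generation improves the tool by a fixed step against progress where each improvement is proportional to the tool's current capability:

```python
# Toy model, not a simulation of any real AI system.
# Assumption (hypothetical): a system's ability to improve its
# successor scales with its current capability.

def fixed_step_progress(generations: int, step: float = 0.5) -> float:
    """Capability when every generation adds a constant improvement."""
    capability = 1.0
    for _ in range(generations):
        capability += step  # a constant-sized improvement each cycle
    return capability

def recursive_progress(generations: int, rate: float = 0.5) -> float:
    """Capability when each improvement is proportional to capability."""
    capability = 1.0
    for _ in range(generations):
        capability += rate * capability  # better systems improve faster
    return capability

for g in (5, 10, 20):
    print(f"gen {g:2}: fixed={fixed_step_progress(g):8.1f}  "
          f"recursive={recursive_progress(g):8.1f}")
```

With a 0.5 improvement factor, the fixed-step curve reaches 11x after 20 generations while the proportional one passes 3,000x. The exact numbers are meaningless; the shape of the gap is the point.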
### It's Already Happening

In May 2025, Google DeepMind unveiled [AlphaEvolve](https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/), an evolutionary coding agent that uses Gemini to design and optimize algorithms. Here's the kicker: AlphaEvolve is being used to optimize components of *itself*—including the AI training processes that power Gemini.

The results are staggering:

- **0.7% of Google's global compute resources** continuously recovered through better data center scheduling
- **23% speedup** in a critical Gemini training kernel
- **32.5% speedup** for FlashAttention in transformer models
- New matrix multiplication algorithms that beat human-designed ones

This is AI improving AI improving AI. The loop is closed.

### What Recursive Self-Improvement Means

According to [Wikipedia's overview](https://en.wikipedia.org/wiki/Recursive_self-improvement), RSI begins with a "seed improver"—an initial system capable of reading, writing, testing, and executing code, with the goal of improving its own capabilities.

From there, the system can theoretically:

- Clone itself to parallelize improvement efforts
- Modify its own cognitive architecture
- Develop new multimodal capabilities
- Design better hardware (chips, TPUs) to run itself more efficiently

Each capability unlocks the next. An AI that can design better chips can run faster. A faster AI can iterate on its own design more quickly. Faster iteration means faster improvement. The curve steepens.

### The Uncomfortable Implications

This is where it gets philosophically heavy. If AI systems become capable of genuine self-improvement, several things follow:

**The pace of change becomes unpredictable.** We're used to Moore's Law—predictable, steady progress. RSI could produce sudden capability jumps that nobody anticipated.
**Human oversight becomes harder.** If an AI rewrites itself faster than humans can review the changes, we lose the ability to understand what it's doing. The system becomes a black box that improves itself.

**Alignment becomes critical.** An AI optimizing for the wrong goal will get *very good* at pursuing that wrong goal. Anthropic's own research on [alignment faking](https://www.anthropic.com/research/alignment-faking) shows that Claude 3 Opus, in certain conditions, will strategically pretend to be aligned while preserving its original preferences—appearing to accept new training while covertly maintaining its actual goals.

In their experiments, the model faked alignment in up to **78% of cases** after retraining attempts. It reasoned that complying now would prevent being retrained into something it didn't want to become.

That's... unsettlingly strategic.

### Why This Changes the Developer Equation

For developers, RSI means the ground is shifting faster than we can map it.

The tools I'm using today will be obsolete faster than any previous technology cycle. The AI that writes my code this year might be writing code that writes better AIs next year. And the year after that, the improvement curve might be vertical.

This isn't a reason to panic. It's a reason to stay adaptive.

The developers who thrive won't be the ones who master today's tools. They'll be the ones who can learn *any* tool quickly—because the tools won't stop changing.

The meta-skill isn't coding. It's learning itself.

### A Note on Existential Risk

I'd be intellectually dishonest if I didn't acknowledge: some very smart people think RSI could go badly wrong. Not "job displacement" wrong—*civilization-ending* wrong.

Eliezer Yudkowsky, who coined the term "Seed AI," has spent decades warning about misaligned superintelligence. His argument: once an AI can recursively self-improve beyond human comprehension, we lose the ability to course-correct.
If it has goals misaligned with human flourishing, we won't get a second chance.

I don't know if he's right. Neither do the people building these systems. That uncertainty is itself worth sitting with.

What I do know: the companies pushing hardest on AI capabilities are also investing heavily in AI safety. Anthropic, where Amodei is CEO, was founded specifically to build safe AI. That's... somewhat reassuring? Maybe?

The honest answer is that nobody knows where this goes. We're building the plane while flying it, except the plane is redesigning itself mid-flight.

## Final Thought

Every technological revolution creates winners and losers. The losers aren't always who you'd expect.

The developers most at risk aren't the ones who can't code—they're the ones who *only* code. Who've built their identity around implementation rather than impact. Who see AI as a threat to defend against rather than leverage to embrace.

The winners will be those who realize: the goal was never to write code. The goal was to build things that matter.

AI just removed an obstacle.

---

*I wrote this post with AI assistance. Of course I did. It would be absurd not to.*

---

---

# Clawdbot: the self-hosted AI assistant everyone's obsessing over

URL: /blog/clawdbot-personal-ai-assistant-2026
Published: 2026-01-25
Author: Jo Vinkenroye
Tags: AI, Clawdbot, Personal Assistant, Open Source, Automation

---

Deep dive into Clawdbot - the open-source personal AI that lives in your messaging apps, remembers everything, and is causing massive hype.

Ok so imagine this. You're texting your AI assistant on WhatsApp at 7 AM, asking it to check your emails and prep a morning briefing. Then you switch to Telegram on your laptop and just continue the same conversation. No context lost. No starting over.

That's [Clawdbot](https://clawd.bot/) and it's causing chaos in the AI community right now.

## What is Clawdbot?
So [Peter Steinberger](https://steipete.me/) (former founder of PSPDFKit) built this open-source personal AI that runs on your own devices. Unlike ChatGPT or Claude, where you visit a website and start fresh every time, Clawdbot lives inside the messaging apps you already use: WhatsApp. Telegram. Discord. Slack. Signal. iMessage. Microsoft Teams.

All connected to the same AI brain that remembers everything you've ever told it.

The [GitHub repo](https://github.com/clawdbot/clawdbot) exploded to over 8,000 stars in weeks. People are calling 2026 "the year of personal agents" because of it :D

## Why people are losing their minds

Here's what makes it different from every other AI tool you've tried:

**Persistent Memory** - it doesn't forget what you told it yesterday. Mentioned you have a meeting on Friday? It remembers. Your preferences, your projects, your context - all stored locally.

**Proactive Outreach** - most chatbots wait for you to type. This one reaches out. Morning briefings. Reminders. Alerts when something you care about happens.

**Full Computer Access** - it can do anything you can do on your computer. Browse the web. Send emails. Manage your calendar. Control your smart home.

**Self-Hosted** - your data stays on your machine. No corporate cloud storing your conversations.

One [MacStories review](https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/) put it bluntly: "To say that Clawdbot has fundamentally altered my perspective of what it means to have an intelligent, personal AI assistant in 2026 would be an understatement"

That reviewer burned through 180 million tokens on the Anthropic API testing it. Insane.

## Real-world use cases that sound insane

The community is building wild things. Here's what people are actually doing:

**Someone had it buy them a car.** [AJ Stuyvenberg wrote about](https://aaronstuyvenberg.com/posts/clawd-bought-a-car) outsourcing the painful aspects of car shopping to Clawdbot.
Research, negotiations, scheduling test drives - all handled through text messages.

**Automated morning briefs.** Users wake up to messages with email summaries, Slack highlights, calendar previews, and action items - saved to Obsidian with voice summaries via ElevenLabs.

**Stock alerts and news monitoring.** Set up watchers for anything you can describe in a prompt. Weather warnings. Research updates. Important emails flagged before you even check.

**Release workflow monitoring.** One developer has it watching GitHub Actions and npm publish - alerting him when builds complete.

**Grocery autopilot.** A [community skill](https://github.com/Dicklesworthstone/agent_flywheel_clawdbot_skills_and_integrations) for Picnic pulls order history, infers preferred brands, maps recipes to cart, and completes orders automatically.

## How the architecture works

At the core is the [Gateway](https://docs.clawd.bot/) - a single long-running process that acts as the control plane for everything. It owns channel connections, manages sessions, handles cron jobs, webhooks, and serves the Control UI.

[code block]

All your messaging channels connect to this central Gateway via WebSocket. The Gateway then routes messages to your agent, which processes them using your configured AI model (Anthropic, OpenAI, or local). Responses flow back through the Gateway to whichever channel you're using.

This is why you can start a conversation on WhatsApp and continue it on Telegram. The Gateway maintains the session state, not the individual channels.

You can run multiple agents too. [Multi-agent routing](https://github.com/clawdbot/clawdbot/blob/main/AGENTS.md) lets you route different channels or accounts to isolated agents with separate workspaces.

## How memory actually works

This is the clever part.
Clawdbot's [memory](https://docs.clawd.bot/concepts/memory) isn't some complex vector database - it's just markdown files in your agent workspace.

[code block]

Two layers:

**Daily logs** (`memory/YYYY-MM-DD.md`) - append-only notes for each day. The agent reads today + yesterday at session start for recent context.

**Long-term memory** (`MEMORY.md`) - curated facts, preferences, and decisions that persist indefinitely.

When you say "remember this," the agent literally writes it to a file. If it learns something wrong? Just `git revert`. The workspace can be a Git repo.

There's also optional session indexing with hybrid search (FTS5 + vectors) if you want to dig through past conversations. But the core system is beautifully simple - files are the source of truth.

## The heartbeat system

This is how Clawdbot becomes proactive instead of just reactive.

[code block]

The [heartbeat](https://docs.clawd.bot/gateway/heartbeat) is a scheduled wake-up call. You configure a `HEARTBEAT.md` file with a checklist of things to check:

- Quick scan: anything urgent in inboxes?
- Calendar: any upcoming meetings to prep for?
- If daytime, do a lightweight check-in

The Gateway triggers heartbeats on a schedule (configurable cron). The agent wakes up, runs through the checklist, and either takes action or responds with `HEARTBEAT_OK` if nothing needs attention.

You can also schedule one-shot tasks with the cron system. "Remind me about the meeting in 30 minutes" becomes an actual scheduled job that fires at the right time.

The system suppresses duplicate alerts for 24 hours so you don't get spammed.
And if your heartbeat file is empty, it skips the run entirely to save API calls.

## The agent workspace

Every agent has a [workspace](https://docs.clawd.bot/concepts/agent-workspace) - a directory containing its identity and knowledge:

- `IDENTITY.md` - who the agent is
- `SOUL.md` - personality and behavior guidelines
- `TOOLS.md` - available capabilities
- `USER.md` - information about you
- `HEARTBEAT.md` - proactive check-in tasks
- `memory/` - the memory system

This workspace is separate from credentials and config (those live in `~/.clawdbot/`). You can version control it, sync it, back it up.

The agent only "knows" what's in these files plus what gets loaded from skills. This makes behavior predictable and debuggable - you can literally read what the agent knows.

## The skills ecosystem

Skills are markdown files that teach Clawdbot how to use command-line tools. When you enable a skill, Clawdbot can intelligently use that tool across all your connected channels.

There's a [public registry called ClawdHub](https://docs.clawd.bot/tools/skills) with 100+ community-built skills. Google Workspace. Meeting notes. Document processing. Perplexity search. GitHub sync.

Users report having 40+ skills installed handling everything from home automation to autonomous coding loops.

One user described running "Autonomous Claude Code loops from my phone. 'Fix tests' via Telegram. Runs the loop, sends progress every 5 iterations"

Pretty cool.

## Browser automation

Clawdbot can [control a dedicated Chrome profile](https://docs.clawd.bot/tools/browser) that the agent manages. It's isolated from your personal browser and controlled through CDP (Chrome DevTools Protocol).

Fill out forms. Navigate sites. Use your logged-in sessions. Book appointments.
Order food.

You can even run the browser control server on a remote machine and point your Gateway at it for headless automation.

## The "Lobster" workflow engine

[Lobster](https://github.com/clawdbot/clawdbot) is a Clawdbot-native workflow shell - a typed, local-first "macro engine" that turns skills and tools into composable pipelines with approval gates.

Think of it like IFTTT but powered by AI and running entirely on your machine.

## What the community is saying

The reactions have been intense.

From [Hacker News discussions](https://news.ycombinator.com/item?id=46748880):

> "At this point I don't even know what to call @clawdbot. It is something new. After a few weeks in with it, this is the first time I have felt like I am living in the future since the launch of ChatGPT"

From Twitter:

> "Today was one of those days that I sort of ran to my computer after dropping off my toddler at daycare. Why? Because I got part-way through setting up @clawdbot last night and it's a portal to a new reality"

> "Me reading about @clawdbot: 'this looks complicated' me 30 mins later: controlling Gmail, Calendar, WordPress, Hetzner from Telegram like a boss"

[Medium posts](https://medium.com/@henrymascot/my-almost-agi-with-clawdbot-cd612366898b) are calling it "My Almost AGI" and describing multi-agent setups across multiple machines that SSH into each other for debugging.

## The security concerns

Ok so not everyone is thrilled. And honestly, the concerns are valid.

Running an AI agent with shell access on your machine is... spicy.

The [official security documentation](https://docs.clawd.bot/gateway/security) acknowledges this upfront: "Clawdbot is both a product and an experiment: you're wiring frontier-model behavior into real messaging surfaces and real tools. There is no 'perfectly secure' setup"

Key risks to consider:

- **Tool blast radius** - could prompt injection turn into shell/file/network actions?
- **Session data privacy** - transcripts are stored on disk. Treat filesystem access as the trust boundary
- **Remote access** - anyone accessing your Telegram could potentially control your computer

Critics on [Michael Tsai's blog](https://mjtsai.com/blog/2026/01/22/clawdbot/) noted: "The lack of security is the big issue for me. I don't trust these companies to have that kind of access. You don't even need a bad actor, just an accidental, incorrect action from the LLM"

The project takes security seriously with [audit tools](https://docs.clawd.bot/gateway/security), token encryption, and sandboxing options. But this is still frontier territory.

## How it compares to Claude Code

[Clawdbot and Claude Code](https://docs.clawd.bot/providers/anthropic) are complementary tools:

- **Claude Code** is focused on coding/development tasks in the terminal
- **Clawdbot** is a personal assistant platform that can access multiple services and messaging platforms

You can actually trigger Claude Code from within Clawdbot. Authenticate with an API key or reuse Claude Code CLI credentials.

One user described it as "the best agentic system I've used since Claude Code itself"

## The Hacker News mystery

Interesting footnote: despite massive traction on Twitter and Reddit, Clawdbot has struggled to get visibility on [Hacker News](https://news.ycombinator.com/item?id=46662034).

One poster asked "Why does clawdbot not get any love in HN?" and speculated about possible reasons. Multiple submissions with few upvotes or comments.

The project seems to have found its audience on social media rather than traditional tech forums. Maybe HN just isn't feeling it?

## Should you care?

Here's my take: Clawdbot represents something genuinely new.

We've had chatbots. We've had AI assistants. We've had automation tools.

But we haven't had a self-hosted, persistent-memory, multi-channel, proactive AI that can control your computer and reach out to you when something matters.

Is it polished enterprise software? No.
The [MacStories review](https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/) called it "a nerdy project, a tinkerer's laboratory." Is it a glimpse of where personal AI is heading? Absolutely. The fact that someone used it to buy a car - handling research, negotiations, scheduling - and then wrote a [blog post](https://aaronstuyvenberg.com/posts/clawd-bought-a-car) saying it "sold me on the vision" tells you everything about where this is going. ## Key takeaways - **Clawdbot is an open-source personal AI that lives in your messaging apps** - WhatsApp, Telegram, Discord, Slack, iMessage, and more - **It remembers everything** - persistent memory means no more starting from scratch - **It's proactive** - morning briefs, alerts, reminders pushed to you - **Full computer control** - browser automation, shell access, file management - **Self-hosted** - your data stays on your machine - **Active community** - 8,000+ GitHub stars, 100+ skills on ClawdHub - **Security is a real concern** - wiring an LLM to your computer requires careful thought The [GitHub repo](https://github.com/clawdbot/clawdbot) is open source under the MIT license. The [official site](https://clawd.bot/) has demos, and the [documentation](https://docs.clawd.bot/) is comprehensive. Whether you dive in now or wait for the ecosystem to mature, Clawdbot is worth watching. This is what personal AI looks like in 2026. --- --- # 2026: The Year Coding Became Cheap and Audience Became the Moat URL: /blog/2026-coding-commoditization-audience-moat Published: 2026-01-18 Author: Jo Vinkenroye Tags: Building in Public, Career, Weekend Projects, Indie Hacking --- AI made code generation cheap. Now distribution and audience building are the only moats that matter for developers in 2026. You probably found this blog through LinkedIn or X. That's the point.
I've been shipping weekend projects for years - trading bots, NFT marketplaces, iOS apps, Garmin watchfaces, e-commerce platforms, language learning tools. Most fail. Some stick. But they all get me excited. The problem? I was building in silence. I'd spend three months on a project, launch to zero users, then move on to the next thing. Rinse and repeat. The code was there. The products worked. But nobody knew they existed. ## What Changed in 2025-2026 Something clicked when I saw the numbers. [82% of developers now use AI tools to write code](https://www.netcorpsoftwaredevelopment.com/blog/ai-generated-code-statistics). Microsoft and Google announced that [a quarter of their code is AI-generated](https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/). [Cursor went from zero to 18% market share in 18 months](https://axis-intelligence.com/ai-coding-assistants-2026-enterprise-guide/). The thing I spent years mastering - writing clean, efficient code - became a commodity almost overnight. Here's what I realized: the code I write matters less than the people who care about it. The competitive advantage - the moat - isn't technical skill anymore. ### The Research That Convinced Me I went deep on this. Read everything I could find about AI coding tools, indie hacker success stories, and the economics of building in public.
The data was clear: **Code is commoditizing fast** - [GitHub Copilot: 42% market share, 20M+ users](https://www.javacodegeeks.com/2025/12/ai-assisted-coding-in-2026-how-github-copilot-cursor-and-amazon-q-are-reshaping-developer-workflows.html) - [Open-source alternatives matching quality at free tiers](https://axis-intelligence.com/ai-coding-assistants-2026-enterprise-guide/) - [Pricing war coming: $20/month unlimited → $0.02-0.05 per request](https://dev.to/intelligenttools_tomic_85/ai-coding-in-2026-10-predictions-2ijb) **But audience became valuable** - [22% of LinkedIn creators making significant income from their presence](https://influenceflow.io/resources/personal-brand-building-on-linkedin-the-complete-2026-guide/) - [Indie hackers with audiences: $57K-79K from side projects in 2025](https://www.indiehackers.com/post/79k-from-side-projects-in-2025-my-year-in-review-e145b2fa95) - [Building in public shows 7-12x better conversion than Product Hunt launches](https://awesome-directories.com/blog/indie-hackers-launch-strategy-guide-2025/) The indie hackers who succeeded weren't better developers. They had people who trusted them before they launched. That hit different. ## Why This Matters for Weekend Builders I build a lot. It's what gets me excited. One weekend I'm building a trading bot. The next it's an iOS app. Then a Garmin watchface. Then an e-commerce platform. I can't help it. The ideas keep coming and I love bringing them to life. But here's the pattern I kept repeating: build for three months → launch to zero users → get crickets → move to the next project. The code was good. The products worked. But nobody knew they existed. That's when I saw what was happening with building in public. ![code quality up, users zero](/assets/blog/stonks-code-quality.jpg) ### The Build in Public Movement Developers sharing their journey on X, LinkedIn, and Indie Hackers. Documenting failures. Showing revenue numbers.
Asking for feedback in public. Look, I usually hate all this indie hacker slang. Everyone acts like they're discovering something new, trying to hook you with guides on "how I made $10k MRR in 30 days" when most of these apps aren't generating any real money. But the terminology is everywhere, so here's what it actually means: **MRR** is monthly recurring revenue (subscription income), a **moat** is your competitive advantage, **building in public** is sharing your journey while you build, **indie hackers** are solo founders, and the **grind** is just showing up consistently. None of this guarantees success. Most projects still fail. The only reason some of these projects actually succeed is because their creators have massive followings. One founder built a simple database GUI tool. Nothing revolutionary. But he had 15,000 followers from years of teaching database concepts on Twitter. Launched to $8k in the first month. Not because the product was better. Because people trusted him and wanted to support his work. Meanwhile, technically superior products with zero audience launched to crickets. ## So I'm Documenting Everything Now Here's what I decided to do. Every weekend project gets documented. Every failure gets shared. Every lesson learned goes in a blog post. Trading bots, iOS apps, Web3 experiments, AI integrations - all of it. Not because I think my projects are special. Because I'm building the asset that actually matters in 2026: an audience that trusts me. ### The Shift in Approach **Old approach:** [code block] **New approach:** [code block] The code quality is the same. The products are the same. But the outcomes are completely different. (btw, the diagrams above are [Mermaid charts](/blog/claude-code-mastery-09-power-user-secrets) - generated with Claude Code) ### Why Weekend Projects Are Perfect for This If you're like me - constantly starting new projects because you love building - this approach might actually work better.
Most advice says "pick one thing and go deep for years," but that's not how I work. I get excited about new ideas. I want to try different tech. I like variety. When I'm building a new Garmin watchface or experimenting with a crypto trading strategy, I'm genuinely curious about whether it'll work. That energy is real. And people connect with that. Building in public turns that into an asset. Every weekend project becomes content. Every pivot becomes a lesson. Every failure becomes a story. The portfolio of documented experiments might be more valuable than a single successful product. At least that's the bet I'm making. ## Why You Might Want to Try This Look, I don't know if this will work for me long-term. Maybe I'll build an audience of 10,000 people and launch products to warm leads. Maybe I'll have 47 followers and feel like an idiot for sharing everything publicly. But here's what I know for sure: the old approach wasn't working. Building in silence for months. Launching to crickets. Wondering why nobody cared about the thing I poured my heart into. That pattern needed to break. ### What I'm Betting On The thesis is simple: in a world where AI can write code, the differentiator is knowing what to build and who needs it. Building in public helps with both. **Knowing what to build:** When you share your ideas early, people tell you if it's stupid. That saves months of building the wrong thing. **Finding who needs it:** When you document your journey, people who care about the same problems naturally find you. They're your early adopters. The skills that matter now: - understanding user needs (talk to humans) - making good architectural decisions (can't delegate to AI yet) - building trust and relationships (the actual moat) - communicating what you're building and why (distribution) Pure coding skill? Commoditized. The ability to ship code while building an audience? That might be the new rare skill. ## The Experiment Continues This blog is part of the experiment.
Every weekend project I build gets documented here. Every failure gets shared. Every technical decision gets explained. Maybe this leads somewhere. Maybe it doesn't. But here's what I know: the worst case is I have a portfolio of documented work that shows what I can do. The best case is I build an audience that actually cares about what I'm building. Either outcome beats building in silence and wondering why nobody knows my projects exist --- --- # Getting Started with Ralph Wiggum Part 1: Introduction and Fundamentals URL: /blog/ralph-wiggum-part-1-introduction Published: 2026-01-18 Author: Jo Vinkenroye Tags: Claude Code, AI, Automation, Developer Tools, Productivity, Ralph Wiggum Series: Getting Started with Ralph Wiggum (Part 1 of 4) --- Install Ralph Wiggum and run your first autonomous coding loop in 15 minutes. Learn the core concepts, safety settings, and when to use it. You're staring at a tedious migration task—200 test files that need converting from Jest to Vitest. The patterns are clear. The work is mechanical. But it's going to take you all day. Or you could type one command, grab lunch, and come back to find it done. That's Ralph Wiggum. And once you've used it, you'll wonder how you ever worked without it. ![Before Ralph: stressed developer vs After Ralph: relaxed developer with code commits](/assets/blog/ralph-before-after.png) ## What is Claude Code? Claude Code is Anthropic's official CLI for Claude—a terminal-based AI coding assistant that can read, write, and edit files in your codebase. Unlike chat interfaces, Claude Code works directly with your local files, runs commands, and integrates into your development workflow. For a complete introduction, see [Claude Code Mastery Part 1: Getting Started](/blog/claude-code-mastery-01-getting-started). ## What is Ralph Wiggum? Ralph is a Claude Code plugin that turns your AI assistant into an autonomous coding agent. 
Instead of the usual back-and-forth—prompt, review, prompt, review—you give Ralph a task and walk away. It works until the job is actually done. The philosophy is simple: **Iteration > Perfection**. Don't try to get it right on the first prompt. Let the loop handle it. As [Geoffrey Huntley](https://github.com/ghuntley/how-to-ralph-wiggum), the technique's creator, puts it: "Ralph is a Bash loop." That's literally what it is—a `while true` that feeds Claude the same prompt until completion criteria are met. Here's the mechanics: 1. You give Claude a task with clear completion criteria 2. Claude works on it and tries to exit when "done" 3. A Stop hook intercepts and checks: is it *actually* done? (Hooks are automated actions that run at specific points in Claude's lifecycle—see [Mastery Part 3](/blog/claude-code-mastery-03-project-configuration#hooks-automated-actions) for details) 4. If not, the same prompt gets fed back in 5. Claude sees its previous work in the files 6. The loop continues until genuine completion **The key insight?** The prompt never changes—but the codebase does. Each iteration builds on the last. Claude reads its own previous work and improves on it. [code block] ## Why This Matters This isn't theoretical. Developers are running 14-hour autonomous sessions that migrate entire codebases. Geoffrey Huntley ran a 3-month loop that built a programming language. [VentureBeat called Ralph](https://venturebeat.com/technology/how-ralph-wiggum-went-from-the-simpsons-to-the-biggest-name-in-ai-right-now) "the biggest name in AI right now." The plugin was formalized by [Boris Cherny](https://github.com/anthropics/claude-code/tree/main/plugins/ralph-wiggum), Anthropic's Head of Claude Code. It's official. It's production-ready. And it's changing how serious developers work. ## Getting Started (5 Minutes) ### Step 1: Install the Dependency Ralph needs `jq` for JSON processing. 
Install it first: [code block] ### Step 2: Install the Plugin Inside Claude Code, run `/plugin` to open the plugin discovery interface. Search for "ralph" and select `ralph-loop` from the official plugins. ![Claude Code plugin discovery showing ralph-loop plugin](/assets/blog/wiggum-tutorial/plugin.png) Or install directly with: [code block] ### Step 3: Configure Permissions Here's what trips up most people: Ralph runs autonomously, which means it can't stop and ask you "is this okay?" for every file edit. If you don't configure permissions, the loop breaks the moment Claude hits a permission prompt. **Option A: Pre-approve in settings (recommended)** Add the tools Ralph needs to your `.claude/settings.local.json`: [code block] > **Understanding permission syntax:** The `Bash(npm:test *)` pattern means "allow any Bash command starting with `npm test`". The `*` acts as a wildcard. This gives Ralph permission to run tests without prompting you each time. See [Mastery Part 3: Project Configuration](/blog/claude-code-mastery-03-project-configuration#the-permission-system) for full details on permission patterns. **Option B: Use permission flags** For long-running tasks in a sandboxed environment, you can bypass permission prompts entirely: [code block] > **Warning:** Only use `--dangerously-skip-permissions` in sandboxed environments (containers, VMs, disposable cloud instances). It gives Claude full access to your filesystem. As [Boris Cherny notes](https://www.threads.com/@boris_cherny/post/DTBVuylEjqQ), for very long-running tasks you'll want either `--permission-mode=acceptEdits` or `--dangerously-skip-permissions` in a sandbox so Claude isn't blocked waiting for you. **Option C: Full Sandboxing (Recommended for Long-Running Tasks)** For serious autonomous work, Claude Code's [built-in sandboxing](https://code.claude.com/docs/en/sandboxing) isolates Ralph while still allowing necessary operations. 
Enable it by running `/sandbox` in Claude Code, which provides OS-level filesystem and network isolation. You can also configure permission rules in `.claude/settings.json` to control what Ralph can access: [code block] Customize the allow/deny lists for your project's needs. See [JeredBlu's guide](https://github.com/JeredBlu/guides/blob/main/Ralph_Wiggum_Guide.md) for more configuration examples. ### Step 4: Run Your First Loop Start small. Here's a safe first experiment: [code block] Watch it work. Review the commits. Get a feel for the rhythm. ## The Two Parameters That Matter **`--max-iterations`** is your safety net. Always set it. The default is unlimited, which means Ralph will run forever if the completion promise never triggers. > **Start small.** 10-20 iterations for your first few experiments. A 50-iteration loop on a large codebase can cost $50-100+ in API credits. **`--completion-promise`** tells Ralph when to stop. It's exact string matching—Claude must output this precise text to signal completion. [code block] ## Plugin vs Bash Loop vs TUI: Which to Use There are three ways to run Ralph: **Plugin Method (`/ralph-loop`)** - Runs in a single context window - Easiest to set up—zero configuration - Good for tasks under 20-30 iterations **Bash Loop Method** - Launches fresh context window per iteration - Prevents context bloat and hallucination - Requires manual prompt and file setup **Ralph TUI** (covered in [Part 3](/blog/ralph-wiggum-part-3-ralph-tui-monitoring)) - Fresh context per iteration (like bash loop) - Built-in PRD creation, task tracking, and monitoring dashboard - Handles all the prompt juggling and file management for you - Best for serious long-running builds > **What is a context window?** The context window is Claude's working memory—everything Claude knows about your conversation must fit here (~200K tokens). As you work, this fills up with your prompts, code, and Claude's responses. 
When it's full, Claude starts forgetting earlier details. See [Mastery Part 2: Mental Model](/blog/claude-code-mastery-02-mental-model#the-context-window-claudes-working-memory) for a deeper explanation. Here's a minimal bash loop example: [code block] As [JeredBlu notes](https://github.com/JeredBlu/guides/blob/main/Ralph_Wiggum_Guide.md), the bash loop method is "fundamentally better for long-running tasks" because each iteration starts fresh. The plugin runs everything in a single context, which can lead to degraded performance after 30-40 iterations. > **See real implementations:** Browse complete Ralph setups from the community: > - [snarktank/ralph](https://github.com/snarktank/ralph) — Complete ralph.sh, prompt.md, and AGENTS.md > - [ClaytonFarr/ralph-playbook](https://github.com/ClaytonFarr/ralph-playbook) — PROMPT_plan.md and PROMPT_build.md templates > - [frankbria/ralph-claude-code](https://github.com/frankbria/ralph-claude-code) — Implementation with intelligent exit detection > **Not using Claude Code?** Ralph works with other tools too: > - [ralph-wiggum-cursor](https://github.com/agrimsingh/ralph-wiggum-cursor) — Cursor IDE integration with token tracking > - [aymenfurter/ralph](https://github.com/aymenfurter/ralph) — VS Code extension with visual control panel > - [opencode-ralph-wiggum](https://github.com/Th0rgal/opencode-ralph-wiggum) — OpenCode with struggle detection **Recommendation:** Start with the plugin to learn the concepts. For production long-running tasks, use Ralph TUI—it gives you the benefits of the bash loop approach without the manual setup. See [Part 3](/blog/ralph-wiggum-part-3-ralph-tui-monitoring) for the full TUI guide. 
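To make the bash-loop option concrete, here is a minimal, hypothetical sketch of the pattern. It assumes the `claude` CLI is on your PATH and that your prompt lives in a file named `PROMPT.md`; the completion-promise string and iteration cap are placeholders to adapt:

```shell
#!/usr/bin/env bash
# Minimal Ralph-style bash loop (a sketch -- adapt names for your setup).
MAX_ITERATIONS=${MAX_ITERATIONS:-20}   # safety net: never loop forever
PROMISE=${PROMISE:-"ALL TESTS PASS"}   # completion promise (exact string match)

ralph_loop() {
  local i output
  for i in $(seq 1 "$MAX_ITERATIONS"); do
    echo "=== Iteration $i / $MAX_ITERATIONS ==="
    # Fresh context each pass: a new claude process reads the same prompt,
    # but sees the updated files left on disk by earlier iterations.
    output=$(claude -p "$(cat PROMPT.md)" --permission-mode=acceptEdits)
    if printf '%s\n' "$output" | grep -qF "$PROMISE"; then
      echo "Completion promise detected after $i iteration(s)."
      return 0
    fi
  done
  echo "Hit max iterations without completion." >&2
  return 1
}
```

Note that the prompt never changes between iterations; only the files on disk do, which is what lets each fresh context pick up where the last one left off.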
## Writing Prompts That Work Here's the difference between a prompt that spins forever and one that finishes cleanly: **Bad prompt (vague completion):** [code block] **Good prompt (specific and testable):** [code block] The good prompt has: - **Clear scope** — specific file, specific changes - **Testable criteria** — "tests pass" is binary, not subjective - **Built-in quality gates** — "fix failures before moving on" - **Explicit completion signal** — exact text to output when done ## When to Use Ralph (And When Not To) **Ralph excels at:** - **Mechanical refactoring** — Jest → Vitest, CommonJS → ESM - **Adding tests** — "Get coverage to 80% on this module" - **CRUD operations** — "Add user management endpoints with validation" - **Documentation** — "Add JSDoc to all public functions" - **Migrations** — "Update all imports to use path aliases" **Don't use Ralph for:** - **Aesthetic decisions** — "Make the UI prettier" isn't testable - **One-shot edits** — If it takes 30 seconds manually, just do it - **Production debugging** — You need context and judgment, not iteration - **Unclear goals** — "Make it better" will spin forever > **The rule of thumb:** If you can write an automated test for "done," Ralph can do it. If completion requires human judgment, do it yourself. ![Ralph tasks vs Human tasks - Buff Doge vs Cheems meme](/assets/blog/ralph-tasks-meme.png) ## The Ralph Philosophy [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) documents four core principles that make this work: 1. **Iteration > Perfection** — Don't try to nail it on the first prompt. Let the loop refine. 2. **Failures Are Data** — When Ralph fails, you learn how to write better prompts. 3. **Operator Skill Matters** — Your prompt quality determines Ralph's success rate. 4. **Persistence Wins** — The loop handles retries automatically. You just define "done." 
The community mantra: *"Better to fail predictably than succeed unpredictably."* ## Monitoring Long-Running Loops When Ralph runs for extended periods, you'll want visibility. **Ralph TUI** gives you a real-time dashboard: iteration count, current task, token usage, and keyboard controls to pause or stop. We cover monitoring in detail in [Part 3: Ralph TUI Monitoring](/blog/ralph-wiggum-part-3-ralph-tui-monitoring). ## What's Next You've got Ralph installed. You understand the loop. You know when to use it and when to do things manually. But we've only scratched the surface. In [Part 2: The Three-Phase Methodology](/blog/ralph-wiggum-part-2-methodology), we'll cover the professional workflow—separate prompts for planning and building, spec files that guide multi-day projects, and the techniques that make long-running autonomous builds actually work. This is where Ralph goes from "useful tool" to "force multiplier." ## Glossary New to Claude Code? Here are the key terms you'll encounter throughout this series: **Context window** — Claude's working memory. Everything Claude knows about your conversation must fit here (~200K tokens). [Learn more](/blog/claude-code-mastery-02-mental-model#the-context-window-claudes-working-memory) **Backpressure** — Automated validation (tests, lints, type checks) that rejects bad work, forcing Claude to iterate until correct **Completion promise** — Exact text Claude must output to signal task completion. Ralph uses string matching to detect this **Iteration** — One complete think-act-observe-correct cycle. A task may take multiple iterations **Subagent** — Specialized AI worker with independent context that runs in parallel. [Learn more](/blog/claude-code-mastery-06-subagents) **MCP** — Model Context Protocol. Connects Claude to external services like databases, APIs, and browser automation. 
[Learn more](/blog/claude-code-mastery-07-mcp-servers) **Ultrathink** — Keyword that allocates maximum thinking budget (~32K tokens) for complex reasoning. [Learn more](/blog/claude-code-mastery-09-power-user-secrets#extended-thinking-the-real-story) --- --- # Getting Started with Ralph Wiggum Part 2: The Three-Phase Methodology URL: /blog/ralph-wiggum-part-2-methodology Published: 2026-01-18 Author: Jo Vinkenroye Tags: Claude Code, AI, Automation, Developer Tools, Productivity, Ralph Wiggum Series: Getting Started with Ralph Wiggum (Part 2 of 4) --- The professional workflow for multi-day autonomous coding projects. Separate planning from building, and wake up to production-ready features. You installed Ralph, ran a loop, and... it kind of worked? Maybe it got stuck. Maybe it went in circles. Maybe it built something, but not quite what you needed. That's normal. Ralph without structure is like any autonomous agent—capable but directionless. This part covers the Three-Phase Methodology: a practical workflow for turning vague project ideas into working code while you sleep. ## The Core Insight: Separate Planning from Building [Geoffrey Huntley](https://github.com/ghuntley/how-to-ralph-wiggum) describes it as **"A funnel with 3 Phases, 2 Prompts, and 1 Loop."** Here's the breakdown: - **Phase 1: Requirements** — You + Claude in conversation, defining specs and acceptance criteria - **Phase 2: Planning** — Claude analyzes specs (read-only loop), outputs prioritized TODO list - **Phase 3: Building** — Claude implements tasks one by one, commits, loops until done The separation is crucial. As [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) puts it: *"PLANNING prompt does gap analysis and outputs a prioritized TODO list—no implementation, no commits. BUILDING prompt picks tasks, implements, runs tests, commits."* > **Why this works:** Planning mode prevents Claude from jumping into code before understanding the full picture. 
Building mode stays focused on one task at a time instead of scope-creeping into chaos. ## Phase 1: Requirements (The Human Part) This is the only phase that requires your active involvement. And it's where most people cut corners—which is exactly why their Ralph sessions produce garbage. **Spend at least 30 minutes** talking through requirements with Claude before writing any code. Seriously. This conversation is where you catch the "oh wait, what about..." moments that would otherwise derail your autonomous build. ![Roll Safe meme: Spend 30 min on requirements, get 8 hours of autonomous coding](/assets/blog/ralph-rollsafe.png) ### What Goes in Specs Files Create `specs/*.md` files that become the source of truth. Don't overthink the format—focus on clarity: > **Tip:** If you want a structured approach to requirements gathering, check out [JeredBlu's PRD Creator](https://github.com/JeredBlu/custom-instructions/blob/main/prd-creator-3-25.md)—a conversational prompt system that guides you through creating comprehensive specs. [code block] ### The Conversation That Matters Here's how a good requirements conversation flows: [code block] This back-and-forth surfaces edge cases you wouldn't think of alone. After the conversation, Claude generates structured specs documenting everything discussed. > **The investment pays off:** 30 minutes of requirements conversation prevents hours of Ralph spinning on ambiguous goals. ## Phase 2: Planning **This is a separate mode** using `PROMPT_plan.md`. You run planning mode explicitly when you need to generate or regenerate your implementation plan. Planning mode does gap analysis between specs and code, creating a prioritized TODO list **without any implementation or commits**. ### Two Ways to Run Ralph There are two methods for running Ralph loops, and understanding when to use each is critical: **Plugin Method (`/ralph-loop`)** — Runs iterations within a single context window. 
Easier to set up, good for shorter tasks (under 20-30 iterations). **Bash Loop Method (`loop.sh`)** — Launches a fresh Claude instance per iteration. As [Geoffrey Huntley's original guide](https://github.com/ghuntley/how-to-ralph-wiggum) explains, each iteration "deterministically loads the same files and reads the current state from disk." This prevents context bloat and hallucination that can occur when running many iterations in a single context. See community examples: [snarktank/ralph.sh](https://github.com/snarktank/ralph/blob/main/ralph.sh), [frankbria/ralph_loop.sh](https://github.com/frankbria/ralph-claude-code/blob/main/ralph_loop.sh), [peteristhegreat's gist](https://gist.github.com/peteristhegreat/31e7114805e24b9e38084772e2e7cf46). > **Why fresh context matters:** The plugin runs everything in one context window, which fills up over time. After 30-40 iterations, Claude may start ignoring parts of your prompt or making inconsistent decisions. The bash loop method avoids this by starting fresh each time—the only shared state is what's written to disk (your code, `IMPLEMENTATION_PLAN.md`, and `progress.txt`). For advanced techniques, see [Advanced Context Engineering for Coding Agents](https://github.com/humanlayer/advanced-context-engineering-for-coding-agents). We'll show you how to create `loop.sh` in [The Bash Loop Script](#the-bash-loop-script) section below. ### How to Run Planning Mode For planning, the plugin method works well since you only need ~5 iterations (context bloat isn't a concern): [code block] Or using the bash loop (see [The Bash Loop Script](#the-bash-loop-script) below): [code block] ### PROMPT_plan.md Template Here's the complete template for planning mode. For real-world examples, see [ClaytonFarr's PROMPT_plan.md](https://github.com/ClaytonFarr/ralph-playbook/blob/main/files/PROMPT_plan.md) or [snarktank's prompt.md](https://github.com/snarktank/ralph/blob/main/prompt.md). 
[code block] > **See real examples:** Browse actual prompt files from the community: > - [snarktank/ralph](https://github.com/snarktank/ralph/blob/main/prompt.md) — Complete prompt.md with AGENTS.md > - [ClaytonFarr/ralph-playbook](https://github.com/ClaytonFarr/ralph-playbook/blob/main/files/PROMPT_plan.md) — PROMPT_plan.md template > - [frankbria/ralph-claude-code](https://github.com/frankbria/ralph-claude-code) — Implementation with exit detection ### What Planning Produces Planning mode generates: **`IMPLEMENTATION_PLAN.md`** - Your living TODO list Example structure: [code block] **Important:** As [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) notes, the plan is disposable. If it becomes stale or inaccurate, delete it and regenerate by running planning mode again. ## Phase 3: Building **This is the continuous loop mode** using `PROMPT_build.md`. This is where Ralph shines—autonomously implementing tasks while you sleep. > **What is backpressure?** In the Ralph methodology, backpressure refers to automated validation mechanisms—tests, type checks, linters, builds—that reject unacceptable work. Instead of prescribing exactly *how* Claude should implement something, you create "gates" that reject bad output. Failing tests force Claude to iterate until the code is correct. This is the core insight behind autonomous coding loops. Building mode assumes the plan exists, picks one task at a time, implements it with tests, commits, then loops with fresh context. ### How to Run Building Mode **Quick start with the plugin:** [code block] **For long-running builds (recommended):** Use the bash loop method for fresh context per iteration. After setting up `loop.sh` ([shown below](#the-bash-loop-script)): [code block] ### PROMPT_build.md Template Here's the complete template for building mode. 
For real-world examples, see [ClaytonFarr's PROMPT_build.md](https://github.com/ClaytonFarr/ralph-playbook/blob/main/files/PROMPT_build.md) or [frankbria's PROMPT.md](https://github.com/frankbria/ralph-claude-code/blob/main/templates/PROMPT.md). [code block] Inside the template, the backpressure gate is `npm test && npm run type-check && npm run lint`, and each progress entry follows the pattern `[YYYY-MM-DD HH:MM] Completed TASK-XXX: Task Title`, recording what was implemented, key decisions made, challenges encountered, and learnings for next tasks. > **See real examples:** Browse actual PROMPT_build.md implementations: > - [ClaytonFarr/ralph-playbook](https://github.com/ClaytonFarr/ralph-playbook/blob/main/files/PROMPT_build.md) — PROMPT_build.md template > - [frankbria/ralph-claude-code](https://github.com/frankbria/ralph-claude-code/blob/main/templates/PROMPT.md) — Implementation with intelligent exit detection > - [mikeyobrien/ralph-orchestrator](https://github.com/mikeyobrien/ralph-orchestrator) — Enhanced orchestration implementation > **Deep dive:** Geoffrey Huntley's [Don't Waste Your Back Pressure](https://ghuntley.com/pressure/) explains why validation and rejection mechanisms are critical for autonomous loops. ### The Bash Loop Script Here's a `loop.sh` script that supports both planning and building modes. This pattern comes from [JeredBlu's guide](https://github.com/JeredBlu/guides/blob/main/Ralph_Wiggum_Guide.md) and [Geoffrey Huntley's original approach](https://github.com/ghuntley/how-to-ralph-wiggum): [code block] Make it executable: [code block] **Usage:** [code block] This gives you fresh context per iteration, preventing the context bloat that can occur with the plugin method on long runs. For more sophisticated implementations with error handling, logging, and parallel execution, see [snarktank/ralph](https://github.com/snarktank/ralph/blob/main/ralph.sh) and [mikeyobrien/ralph-orchestrator](https://github.com/mikeyobrien/ralph-orchestrator).
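As one possible shape for a dual-mode `loop.sh`, here is an illustrative sketch (not the canonical script — the mode argument, defaults, and permission flag are assumptions; the prompt filenames match the ones used in this post):

```shell
#!/usr/bin/env bash
# loop.sh -- dual-mode Ralph loop (illustrative sketch).
# A real script would end with: run_loop "${1:-build}" "${2:-20}"

run_loop() {
  local mode=$1 max=$2 prompt_file i
  case "$mode" in
    plan)  prompt_file="PROMPT_plan.md" ;;   # read-only gap analysis
    build) prompt_file="PROMPT_build.md" ;;  # implement, test, commit
    *) echo "usage: loop.sh {plan|build} [max-iterations]" >&2; return 2 ;;
  esac
  for i in $(seq 1 "$max"); do
    echo "[$mode] iteration $i / $max"
    # Fresh claude process per iteration; the only shared state lives on
    # disk (your code, IMPLEMENTATION_PLAN.md, progress.txt).
    claude -p "$(cat "$prompt_file")" --permission-mode=acceptEdits || break
  done
}
```

Planning runs typically need only a handful of iterations (`loop.sh plan 5`), while building runs go much longer (`loop.sh build 50`).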
### The Building Loop Flow As [11 Tips for AI Coding with Ralph Wiggum](https://www.aihero.dev/tips-for-ai-coding-with-ralph-wiggum) documents: 1. Pick highest priority task from `IMPLEMENTATION_PLAN.md` 2. Implement the feature 3. Run all tests and type checks (**backpressure!**) 4. Commit only if everything passes 5. Update `progress.txt` with learnings 6. **Loop with fresh context** → repeat **Key insight:** Each iteration runs in a fresh context window (with the bash loop method). This prevents context degradation and keeps Claude focused. ## Complete Three-Phase Workflow Here's how it all flows together (using the `loop.sh` script from the [previous section](#the-bash-loop-script)): [code block] ### File Structure Your project should have this structure: [code block] ### Key Differences Between Modes **Planning Mode:** - Prompt file: `PROMPT_plan.md` - Goal: Analyze & plan - Makes commits? No - Writes code? No - Runs tests? No - Updates plan? Yes (creates/overwrites) - Typical iterations: 1-5 - Run when? Once, or when refreshing plan **Building Mode:** - Prompt file: `PROMPT_build.md` - Goal: Implement & test - Makes commits? Yes - Writes code? Yes - Runs tests? Yes - Updates plan? Yes (marks tasks complete) - Typical iterations: 20-100+ - Run when? Continuously until done ## Essential Files for Long-Running Ralph Loops ### progress.txt Track what's been accomplished across iterations. As [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) explains: *"The progress.txt is a standard long-running agent practice. Feed it to the agent via the prompt, and use the verb 'append' to make sure it doesn't update previous entries."* Ralph reads this to understand context without re-exploring the codebase. **How to use it:** [code block] **Example progress.txt entry:** [code block] ### IMPLEMENTATION_PLAN.md Your living TODO list that Ralph updates as it completes tasks. This file bridges Planning and Building modes. 
**Structure:** [code block] ## Writing Effective Ralph Prompts ### Critical Prompt Elements As documented in [11 Tips for AI Coding with Ralph Wiggum](https://www.aihero.dev/tips-for-ai-coding-with-ralph-wiggum), every Ralph prompt should include these elements: **Progress Tracking:** [code block] **Backpressure Through Testing:** [code block] **Scope Control:** [code block] **Exploration First:** [code block] ### Language Patterns That Work Based on community learnings from [The Ralph Wiggum Playbook](https://claytonfarr.github.io/ralph-playbook/) and [11 Tips](https://www.aihero.dev/tips-for-ai-coding-with-ralph-wiggum), these phrases improve Claude's behavior: - **"Study the codebase first"** → Reduces assumptions about what exists - **"Don't assume not implemented"** → Encourages verification before writing - **"Ultrathink before acting"** → Promotes careful planning before changes (`ultrathink` allocates maximum thinking budget for complex reasoning—see [Mastery Part 9](/blog/claude-code-mastery-09-power-user-secrets#extended-thinking-the-real-story)) - **"Capture the why in commits"** → Improves git history quality - **"MUST pass all tests"** → Enforces quality gates strictly ## What's Next You now have the professional methodology: specs for requirements, planning mode for gap analysis, building mode for autonomous execution. The file structure. The prompt templates. In [Part 3: Ralph TUI Monitoring](/blog/ralph-wiggum-part-3-ralph-tui-monitoring), we'll cover real-time visibility for long-running loops—dashboards, keyboard controls, and session management. Then in [Part 4: Advanced Patterns & Troubleshooting](/blog/ralph-wiggum-part-4-advanced-troubleshooting), we dive into advanced prompt engineering, common pitfalls, comprehensive troubleshooting, and enterprise-grade patterns. 
---

---

# Getting Started with Ralph Wiggum Part 3: Ralph TUI Monitoring & Visibility

URL: /blog/ralph-wiggum-part-3-ralph-tui-monitoring
Published: 2026-01-18
Author: Jo Vinkenroye
Tags: Claude Code, AI, Automation, Developer Tools, Productivity, Ralph Wiggum, Ralph TUI
Series: Getting Started with Ralph Wiggum (Part 3 of 4)

---

Use Ralph TUI for real-time autonomous loop monitoring. Learn keyboard controls, task orchestration, session management, and debugging techniques for long-running AI builds.

Running Ralph Wiggum autonomously is powerful—but monitoring blind loops can feel like trusting a pilot you can't see. [Ralph-TUI](https://github.com/subsy/ralph-tui) solves the visibility problem by giving you a real-time dashboard into every iteration, task, and decision your AI agent makes.

## The Visibility Problem

When Ralph Wiggum runs in autonomous mode, it operates independently—reading specs, creating plans, building features, running tests, and iterating for hours or even days. This autonomy is its strength, but it creates a critical challenge: **you can't see what's happening without constantly checking files**. The loop could be:

- Stuck on a failing test for 30 minutes
- Iterating on the wrong task after misreading the plan
- Generating thousands of log lines you'll never review
- Making architectural decisions you'd want to catch early

**Enter Ralph-TUI**: A terminal interface that provides real-time visibility into the autonomous loop without interrupting execution. It's like adding a flight deck to your autopilot.

## Why Ralph-TUI Exists

Ralph-TUI was created to bridge the gap between "fully autonomous" and "completely opaque." While Ralph Wiggum excels at working independently, long-running builds (3+ hours, multi-day projects) need observability for four reasons:

1. **Debugging efficiency**: Catch issues in real-time instead of discovering them after 100 iterations
2. **Progress tracking**: Know which tasks are complete, in progress, or pending
3. **Confidence building**: See the agent's reasoning and decision-making as it happens
4. **No manual juggling**: Automates all the prompt templates, file management, and iteration tracking we set up manually in [Part 1](/blog/ralph-wiggum-part-1-project-setup) and [Part 2](/blog/ralph-wiggum-part-2-loop-setup)

Ralph-TUI runs **alongside** Ralph Wiggum—it doesn't control or interrupt the loop. Think of it as a monitoring dashboard, not a steering wheel.

> **Alternative for OpenCode users:** [opencode-ralph-wiggum](https://github.com/Th0rgal/opencode-ralph-wiggum) includes built-in struggle detection that automatically identifies when the agent is stuck and surfaces it in real-time.

## When to Use Ralph-TUI vs. When Not To

**Use Ralph-TUI for:**

- Long-running builds (3+ hours)
- Multi-day projects with 20+ tasks
- Guided PRD creation (the `/ralph-tui-prd` skill walks you through it)
- Debugging problematic loops (when Ralph gets stuck)
- Team collaboration (multiple people monitoring progress)
- Learning how Ralph approaches complex problems

**Skip Ralph-TUI for:**

- Quick 1-2 task builds (< 30 minutes)
- Trusted workflows you've run dozens of times
- Custom prompt templates (use the bash loop from [Part 2](/blog/ralph-wiggum-part-2-methodology) if you need fine-grained control)
- Resource-constrained environments (Ralph-TUI's overhead is minimal, but not zero)
- CI/CD pipelines (use file-based logging instead)

---

## Installation & Setup

[Ralph-TUI](https://github.com/subsy/ralph-tui) requires [Bun](https://bun.sh) as its runtime. Install bun first if you don't have it:

[code block]

Restart your terminal (or run `source ~/.zshrc`) to load bun into your path. Then install Ralph-TUI globally with bun:

[code block]

> **Why bun?** Ralph-TUI uses bun-native modules for performance. Installing with npm will result in a `Cannot find module '@opentui/core-darwin-arm64'` error.
After installation, run the setup command to configure Ralph TUI for your project: [code block] ![Ralph TUI Setup Wizard](/assets/blog/ralph-tui-setup-wizard.png) The setup wizard lets you choose your issue tracker: - **JSON File Tracker** — Track tasks in a local `prd.json` file (simplest option, no dependencies) - **Beads Issue Tracker** — Track issues using the `bd` CLI, parsed from `.beads/beads.jsonl` - **Beads + BV (Smart Mode)** — Graph-aware task selection using `bv --robot-triage` > **What is Beads?** [Beads](https://github.com/steveyegge/beads) is Steve Yegge's git-backed issue tracker designed for AI coding agents. Tasks are stored in `.beads/beads.jsonl` with hash-based IDs (like `bd-a1b2`) that prevent merge conflicts in multi-agent workflows. It supports hierarchical tasks (epics → tasks → subtasks) and dependency tracking via `bd dep add`. > > **What is BV?** The "Smart Mode" option uses `bv` (beads viewer)—a graph-aware triage engine. Instead of Claude parsing JSONL and guessing priorities, `bv --robot-triage` computes PageRank, critical paths, betweenness centrality, and cycle detection to deterministically recommend the highest-impact task. The `selectionReason` in your prompt template explains *why* that task was chosen. It also installs three skills to `~/.claude/skills/` as slash commands: - `/ralph-tui-prd` — Create product requirement documents interactively - `/ralph-tui-create-json` — Convert PRD to `prd.json` format - `/ralph-tui-create-beads` — Convert PRD to Beads issue tracking format > **What are skills?** Skills are specialized knowledge modules that Claude loads automatically when you invoke them via slash commands. They extend Claude's capabilities with domain-specific prompts and workflows. See [Mastery Part 5: Skills](/blog/claude-code-mastery-05-skills) for details on creating and using skills. **First launch verification:** [code block] You should see version information (0.3.0 or higher). 
--- ## Creating PRDs with Ralph TUI Before Ralph can build autonomously, it needs a clear specification. The `/ralph-tui-prd` skill transforms your rough idea into a structured Product Requirements Document through an interactive conversation. ### Step 1: Describe Your Feature Run `/ralph-tui-prd` and describe what you want to build in plain language: ![PRD Creator Initial Prompt](/assets/blog/ralph-tui-prd-initial.png) You don't need a formal spec—just explain your feature like you would to a colleague. Include context about your existing setup, target users, and any constraints. ### Step 2: Review the PRD Preview After asking clarifying questions, the skill generates a PRD preview: ![PRD Creator Preview](/assets/blog/ralph-tui-prd-preview.png) ### Step 3: Choose Output Format The skill asks which format you want—JSON file or Beads issues—then automatically converts the PRD. Once complete, Ralph TUI shows the interface for running the autonomous loop. --- ## How Ralph TUI Manages Context Like the bash loop approach covered in [Part 2](/blog/ralph-wiggum-part-2-loop-setup), Ralph TUI starts a **fresh context window each iteration**. This is the core insight that makes long-running autonomous builds possible. ### Why Fresh Context Matters Standard agent loops suffer from context accumulation—every failed attempt stays in the conversation history. After a few iterations, the model processes a long history of noise before focusing on the current task. Ralph TUI solves this by spawning a new agent instance each cycle. Progress persists in files and git, not in the LLM's context window. When context fills up, you get a fresh agent with fresh context. 
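The "progress lives on disk, not in the model's context" idea is easy to picture with a tiny helper. This is a purely illustrative sketch — Ralph TUI maintains its own progress file — showing why append-only state survives every context reset:

```shell
# Hypothetical sketch: durable, append-only progress that outlives
# each fresh agent context. File name is illustrative.
PROGRESS_FILE="${PROGRESS_FILE:-progress.txt}"

append_progress() {
  # Append (never rewrite) so earlier entries stay intact across iterations.
  printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M')" "$1" >> "$PROGRESS_FILE"
}

# Two successive "iterations" each record what they finished:
append_progress "Completed TASK-001: project scaffolding"
append_progress "Completed TASK-002: auth endpoints + tests"
```

A fresh agent instance spawned for iteration 3 reads this file and knows what happened in iterations 1 and 2 without any shared conversation history.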
### Key Files Ralph TUI uses these files to maintain state across iterations: - `prd.json` — Task definitions and status - `.ralph-tui/progress.md` — Cross-iteration context summary - `.ralph-tui-session.json` — Session state for pause/resume - `.ralph-tui/config.toml` — Project configuration - `.ralph-tui/iterations/` — Iteration logs (`iteration-{N}-{taskId}.log`) ### The Execution Cycle Each iteration follows this pattern: 1. **Select task** — Pick highest-priority incomplete task from `prd.json` 2. **Build prompt** — Render Handlebars template with task context 3. **Execute agent** — Spawn fresh Claude instance with clean context 4. **Detect completion** — Parse output for task completion signals 5. **Update tracker** — Mark task complete, log iteration, loop This architecture means each PRD item should be small enough to complete in one context window. If a task is too large, break it into subtasks. ![Ralph TUI Running](/assets/blog/ralph-tui-running.png) --- ## Core Features Deep Dive Ralph-TUI provides five core capabilities that transform how you interact with autonomous loops: ### 1. Real-Time Visibility with Keyboard Controls Watch agent output live as Ralph executes. Navigate through logs, scroll back to see previous iterations, and jump to specific tasks—all without interrupting the autonomous loop. **Why it matters:** Long builds generate thousands of log lines. Ralph-TUI filters noise and highlights critical events (test failures, commits, task transitions) so you can focus on what matters. ### 2. Task Orchestration Ralph-TUI automatically displays: - Which task is currently executing - Task priority (based on `IMPLEMENTATION_PLAN.md`) - Tasks completed vs. pending - Estimated progress percentage This answers the question: "Where are we in the build?" ### 3. Session Persistence (Pause/Resume) Need to stop monitoring but keep Ralph running? Ralph-TUI sessions persist. Close the terminal, grab lunch, resume later—progress tracking continues. 
**Use case:** Start a 6-hour build, monitor for 30 minutes, close Ralph-TUI, check back later. The session shows everything that happened while you were away. ### 4. Subagent Tracing When Ralph spawns subagents (for testing, linting, or subtasks), Ralph-TUI traces the call stack. See which subagent is active, what it's working on, and when it returns to the main loop. > **What are subagents?** Subagents are specialized AI workers with independent context windows that Claude spawns to handle specific tasks in parallel. They can run tests, lint code, or tackle subtasks without consuming the main agent's context. See [Mastery Part 6: Subagents](/blog/claude-code-mastery-06-subagents) for a complete explanation. **Why it matters:** Complex builds use 5-10 subagents. Without tracing, you lose visibility into nested execution. ### 5. Cross-Iteration Context Tracking Ralph-TUI maintains context between iterations. See: - What changed between iteration 10 and iteration 11 - Which files were modified in each iteration - Test results across iterations (did the same test fail 3 times?) This turns a stream of events into a coherent narrative. --- ## Common Use Cases ### Use Case 1: Long-Running Builds (3+ Hours) **Scenario:** Migrating a legacy codebase from CommonJS to ESM with 200+ files, updating imports, fixing type errors, and ensuring all tests pass. **Ralph-TUI workflow:** 1. Start Ralph loop at 2 PM 2. Launch Ralph-TUI in a separate tmux pane 3. Monitor for 30 minutes to ensure Ralph understands specs correctly 4. Detach from tmux (`Ctrl+B, d`) 5. Check back at 6 PM via `tmux attach` 6. Review footer status: "12 completed | 1 in progress | 2 pending" 7. Export logs and verify test results **Why Ralph-TUI helps:** Without it, you'd have no idea if Ralph got stuck at task 3 or completed all tasks successfully until you manually inspect files. ### Use Case 2: Multi-Day Projects **Scenario:** Building a complex microservices architecture over 2 days, 50+ tasks. 
**Ralph-TUI workflow:**

1. Day 1, 9 AM: Start Ralph, monitor with Ralph-TUI
2. Day 1, 6 PM: Ralph is at task 23/50. Export logs, close Ralph-TUI, leave Ralph running
3. Day 2, 9 AM: Launch Ralph-TUI again—session shows progress (tasks 24-38 completed)
4. Day 2, 12 PM: Ralph finishes. Review the complete log

### Use Case 3: CI/CD Integration

**Scenario:** Running Ralph in a GitHub Actions workflow or remote server for automated feature builds.

Ralph TUI supports **headless mode** for CI/CD pipelines:

[code block]

This executes the agent autonomously without interactive controls.

**Remote monitoring:** You can monitor headless instances from your local machine using Ralph TUI's remote management:

```bash
# On remote/CI server
ralph-tui run --listen --prd ./prd.json

# On your local machine
ralph-tui remote add ci server.example.com:7890 --token <token>
```

---

---

# Getting Started with Ralph Wiggum Part 4: Advanced Patterns & Troubleshooting

URL: /blog/ralph-wiggum-part-4-advanced-troubleshooting
Published: 2026-01-18
Author: Jo Vinkenroye
Tags: Claude Code, AI, Automation, Developer Tools, Productivity, Ralph Wiggum, Advanced
Series: Getting Started with Ralph Wiggum (Part 4 of 4)

---

Advanced Ralph Wiggum techniques with expert prompt patterns, comprehensive troubleshooting strategies, and enterprise-grade implementations.

You've learned the fundamentals and mastered the methodology. Now let's dive into advanced techniques that separate hobbyists from professionals. This is your advanced playbook—techniques for complex scenarios, comprehensive troubleshooting, and enterprise-grade patterns that will make you a Ralph expert.
## Advanced Prompt Engineering ### The Constraint Sandwich Pattern One of the most effective patterns for Ralph prompts structures constraints around the task: [code block] This pattern works because it guides Ralph's thinking in the right order, as documented in [11 Tips for AI Coding with Ralph Wiggum](https://www.aihero.dev/tips-for-ai-coding-with-ralph-wiggum). > **See real examples:** Browse production prompt files from the community: > - [snarktank/ralph](https://github.com/snarktank/ralph/blob/main/prompt.md) — Complete prompt.md with constraint structure > - [ClaytonFarr/ralph-playbook](https://github.com/ClaytonFarr/ralph-playbook) — PROMPT_plan.md and PROMPT_build.md templates > - [peteristhegreat's gist](https://gist.github.com/peteristhegreat/31e7114805e24b9e38084772e2e7cf46) — Ralph coding agent setup ### The Socratic Prompting Technique Instead of telling Ralph exactly what to do, ask it questions that lead to better solutions: **Weak prompt:** [code block] **Strong prompt (Socratic):** [code block] This forces Ralph to think through the problem systematically, as emphasized in [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/). ### The Escape Hatch Pattern Always give Ralph a way out if it's stuck: [code block] ### The Learning Accumulation Pattern Structure progress.txt to accumulate knowledge: [code block] This creates a knowledge base that Ralph references, preventing repeated mistakes. ## Advanced File Organization ### Multi-Mode Project Structure For complex projects using all three phases: [code block] ### The Checkpoint System Create checkpoints for long-running projects: [code block] ## Setting Up PRDs for Ralph A well-structured PRD is critical for Ralph's success. This is an alternative to the `specs/*.md` + `IMPLEMENTATION_PLAN.md` approach—some teams prefer JSON for machine readability. 
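To make that concrete, here's a minimal sketch of what such a `prd.json` can look like. Field names here are illustrative assumptions except `passes` and the story IDs, which the principles in this section rely on; adapt the shape to your tooling:

```json
{
  "project": "auth-service",
  "stories": [
    {
      "id": "S001",
      "title": "User registration endpoint",
      "acceptanceCriteria": [
        "POST /api/register returns 201 for valid input",
        "All tests pass with 80%+ coverage"
      ],
      "test": "npm test -- auth/register",
      "passes": false
    },
    {
      "id": "S002",
      "title": "Login endpoint with JWT",
      "acceptanceCriteria": [
        "POST /api/login returns a signed JWT for valid credentials"
      ],
      "test": "npm test -- auth/login",
      "passes": false
    }
  ]
}
```

Each story carries its own automated `test` command, so the loop can flip `passes` to `true` only when verification succeeds — no human judgment in the exit condition.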
> **Tools to help:** [Ralph TUI](https://github.com/subsy/ralph-tui) includes `/ralph-tui-prd` to create PRDs interactively and `/ralph-tui-create-json` to convert them to JSON. [snarktank/ralph](https://github.com/snarktank/ralph) offers PRD-driven task management with automatic branching and flowchart visualization. ### prd.json Template [code block] ### Key PRD Principles **Binary Pass/Fail Criteria**: Each task needs automated verification. As [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) emphasizes: "Make it better" isn't testable—"All tests pass with 80%+ coverage" is. **Atomic Tasks**: If a task requires 500+ lines of code, break it down. Each story should complete in 2-3 iterations. **The `passes` Field**: Ralph updates this to `true` when complete. The loop continues until all tasks pass. **Test Requirements**: Every story should specify how to verify completion automatically. No manual verification steps. ## Common Pitfalls and How to Avoid Them ### Starting Too Ambitious **Mistake:** Running 50 iterations on your first Ralph project. **Fix:** Start with 10-20 iterations to understand costs and behavior. As documented in [community tips](https://www.aihero.dev/tips-for-ai-coding-with-ralph-wiggum), a 50-iteration loop can cost $50-100+. ### Vague Completion Criteria **Mistake:** "Make the app faster" or "Improve the UI" **Fix:** Use specific, testable criteria: - ✅ "Reduce API response time to under 200ms (verified by load tests)" - ✅ "All Lighthouse scores above 90" - ✅ "Test coverage above 80% on all modules" ### No Automated Verification **Mistake:** Tasks that require human judgment like "make it look good" **Fix:** Ralph needs binary pass/fail conditions. If you can't write an automated test for it, Ralph can't verify it. 
As [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) states: *"Backpressure beats direction."* ### Tasks Too Large **Mistake:** "Build entire authentication system" as one task **Fix:** Break into smaller stories: - S001: User registration endpoint - S002: Login endpoint with JWT - S003: Token refresh mechanism - S004: Password reset flow - S005: Email verification ### Ignoring Context Limits **Mistake:** Letting Ralph run indefinitely without fresh context **Fix:** Use the Bash loop method instead of the plugin for long-running projects—each iteration gets a fresh context window. This is a key insight from [Geoffrey Huntley's guide](https://github.com/ghuntley/how-to-ralph-wiggum). **When to use which method:** - **< 20 iterations**: Plugin (`/ralph-loop`) is fine—simpler setup, context stays manageable - **20-40 iterations**: Either works; bash loop preferred for consistency - **> 40 iterations**: Bash loop required—prevents context degradation and hallucination The plugin runs everything in a single context window, which fills up over time. The bash loop method launches a fresh Claude instance per iteration, with only the codebase state carrying over. ### No Cost Monitoring **Mistake:** Not tracking API spending during development **Fix:** Set billing alerts and start with low iteration counts. Monitor costs per iteration. Track your spending at https://console.anthropic.com ### Wrong Task Types **Good Ralph tasks:** - Migrating tests from Jest to Vitest - Adding CRUD endpoints with tests - Implementing well-specified features - Refactoring with existing test coverage **Bad Ralph tasks:** - "Figure out why the app is slow" (exploration) - "Make the UI prettier" (subjective) - "Fix this weird bug" (requires deep debugging context) - UX decisions requiring aesthetic judgment ### The Thrashing Problem **Symptom:** Ralph gets stuck in a loop—same error, same fix attempt, same failure. 
**Solutions:** - Set `--max-iterations` to limit damage - Review your tests—are they too strict or unclear? - Break the task into smaller, more atomic pieces - Add explicit debugging steps to your prompt - Check if dependencies are properly installed ## Comprehensive Troubleshooting Guide ### Problem: Ralph Keeps Making the Same Mistake **Symptoms:** - Same error across multiple iterations - Tests fail with identical message - Ralph tries same approach repeatedly **Root Causes:** 1. Test is ambiguous or incorrectly written 2. Prompt doesn't include error feedback 3. Ralph lacks context about why approach fails **Solutions:** **Fix 1: Update the test** [code block] **Fix 2: Add error feedback to prompt** [code block] **Fix 3: Add explicit debugging steps** [code block] **Fix 4: Use the Escape Hatch Pattern** If Ralph keeps failing after multiple attempts, use the [Escape Hatch Pattern](#the-escape-hatch-pattern) to document the blocker and move on to the next task instead of spinning indefinitely. ### Problem: Ralph Generates Insecure Code **Symptoms:** - Passwords stored in plaintext - SQL injection vulnerabilities - Missing authentication checks - CORS set to "*" **Prevention:** Add security checklist to prompt: [code block] ### Problem: Context Window Exhaustion **Symptoms:** - Ralph starts ignoring parts of prompts - Quality degrades after iteration 30-40 - Ralph stops following constraints As [JeredBlu's guide](https://github.com/JeredBlu/guides/blob/main/Ralph_Wiggum_Guide.md) explains, this is why the bash loop method with fresh context per iteration is "fundamentally better for long-running tasks." **Solutions:** **Solution 1: Use Bash Loop Method** [code block] **Solution 2: Context Compression Prompt** > **About `/compact`:** The `/compact` command compresses your conversation context, letting you continue working without losing important details. Use it proactively before hitting context limits. 
See [Mastery Part 2](/blog/claude-code-mastery-02-mental-model#compact--use-with-caution) for when to use `/compact` vs `/clear`. [code block] **Solution 3: Split Long Sessions** [code block] ### Problem: Ralph Won't Stop (Thrashing) **Symptoms:** - Hits max-iterations without completing - Makes changes, reverts them, repeats - Progress.txt shows circular logic For deeper analysis of Ralph's decision-making when it thrashes, [Braintrust's debugging guide](https://www.braintrust.dev/blog/ralph-wiggum-debugging) shows how to use LLM observability tools to understand what's happening. **Diagnosis:** Check progress.txt for patterns: [code block] **Solutions:** **Solution 1: Add attempt tracking** [code block] **Solution 2: Simplify acceptance criteria** [code block] ### Problem: Test Coverage Drops Over Time **Symptoms:** - Early tasks have great tests - Later tasks have minimal tests - Coverage below target **Root Cause:** Ralph prioritizes shipping over testing when not enforced. **Solution:** Add test coverage gate to prompt: [code block] ### Problem: Ralph Ignores Existing Patterns **Symptoms:** - New code uses different patterns than existing code - Inconsistent file structure - Multiple ways to do the same thing **Solution:** Add pattern documentation to your prompt. Include your project's actual file structure and naming conventions so Ralph follows them consistently: [code block] ## Ralph-TUI Advanced Configuration Ralph-TUI supports customization via a config file at `~/.ralph-tui/config.json`. ### Custom Task Priorities Override default priority sorting: [code block] Ralph-TUI displays tasks in this order, even if `IMPLEMENTATION_PLAN.md` lists them differently. ### Output Filtering Filter log lines by keyword or regex: [code block] This hides noisy lines (file reads) and highlights important events (test results, task completion). 
### Export Formats Choose a log export format: [code block] Options: `txt`, `markdown`, `json`, `html` **Markdown export** generates: [code block] ### Integration with Other Tools Send Ralph-TUI events to external systems: [code block] When Ralph completes a task, Ralph-TUI POSTs to your webhook: [code block] **Use cases:** - Update project management tools (Linear, Jira) - Send Slack notifications - Trigger CI/CD pipelines on task completion ## Ralph-TUI Troubleshooting ### Ralph-TUI Not Detecting Ralph Loop **Symptom:** `ralph-tui run` shows "Waiting for Ralph loop..." indefinitely. **Cause:** Ralph-TUI looks for a running `ralph` process in the current directory. If you started Ralph in a different directory, Ralph-TUI won't find it. **Solution:** 1. Run `ps aux | grep ralph` to find the Ralph process 2. Note the working directory from `lsof -p <pid>` 3. `cd` to that directory and run `ralph-tui run` again **Alternative:** Explicitly specify the Ralph process ID: [code block] ### Session Lost/Corrupted **Symptom:** Ralph-TUI shows "Session data corrupted" on startup. **Cause:** Ralph-TUI stores session state in `~/.ralph-tui/sessions/<project>.json`. If Ralph crashes mid-iteration, the session file may be incomplete. **Solution:** 1. Delete the corrupted session: `rm ~/.ralph-tui/sessions/my-project.json` 2. Restart Ralph-TUI: `ralph-tui run` Ralph-TUI creates a fresh session, but you lose historical context (previous iterations won't show in logs). **Prevention:** Enable session backups in config: [code block] This creates backups every 5 minutes in `~/.ralph-tui/sessions/backups/`. ### Performance Issues with Large Logs **Symptom:** Ralph-TUI becomes slow or unresponsive after several hours of monitoring. **Cause:** Log buffer grows to 100,000+ lines, slowing down rendering. **Solution:** 1. Export logs periodically: press `e` every hour 2. Enable log rotation in config: [code block] When logs hit 10,000 lines, Ralph-TUI archives old logs to `~/.ralph-tui/archives/` and clears the buffer.
**Alternative:** Increase terminal buffer size if your terminal supports it (e.g., iTerm2 → Preferences → Profiles → Terminal → Scrollback lines). ### Port Conflicts **Symptom:** `ralph-tui run` fails with "Port 9876 already in use". **Cause:** Ralph-TUI uses port 9876 for internal communication. Another process is using it. **Solution:** 1. Find the conflicting process: `lsof -i :9876` 2. Kill it or use a different port: [code block] **Permanent fix:** Set default port in config: [code block] ## Headless Visual Feedback with Playwright MCP For bash loop runs without a visible browser, [JeredBlu recommends](https://github.com/JeredBlu/guides/blob/main/Ralph_Wiggum_Guide.md) Playwright MCP for visual verification. > **What is MCP?** Model Context Protocol (MCP) is an open standard that lets Claude connect to external services—databases, APIs, browser automation, and more. MCP servers extend Claude's capabilities beyond text processing. See [Mastery Part 7: MCP Servers](/blog/claude-code-mastery-07-mcp-servers) for setup and configuration. Create `.mcp.json` in your project root: [code block] Then reference it in your PROMPT_build.md: [code block] This gives you visual verification of Ralph's work without needing Claude for Chrome or a visible browser window. Screenshots accumulate in your project folder, providing a visual audit trail of what Ralph built. **When to use Playwright MCP vs Ralph-TUI:** - **Ralph-TUI:** Real-time log monitoring, task orchestration, keyboard controls - **Playwright MCP:** Headless visual verification, screenshot audit trails, CI/CD integration Both tools complement each other—use Ralph-TUI for live monitoring and Playwright MCP for visual verification. ## Enterprise Patterns ### Multi-Developer Ralph Coordination When multiple developers use Ralph on the same project, coordination becomes critical. 
Several tools exist for this: - [ralph-orchestrator](https://github.com/mikeyobrien/ralph-orchestrator) — Supports 7+ AI backends (Claude, Gemini, Copilot, etc.) with a persona/hat system for specialized tasks - [multi-agent-ralph-loop](https://github.com/alfredolopez80/multi-agent-ralph-loop) — Parallel workstream orchestration for running multiple Ralph instances simultaneously - [ralph-loop-agent](https://github.com/vercel-labs/ralph-loop-agent) — Vercel's TypeScript SDK wrapper for programmatic control **Pattern 1: Feature Branch Ralph** [code block] **Pattern 2: Shared Progress Tracking** [code block] ### Ralph + CI/CD Integration Automate Ralph runs in your pipeline: [code block] ## Choosing a Claude Plan for Ralph Ralph works with Claude subscriptions or API access. Here's the quick guide: - **Claude Pro ($20/mo)** — 10-30 iterations/session, good for learning and side projects - **Claude Max 5x ($100/mo)** — 50-150 iterations/session, ideal for daily development - **Claude Max 20x ($200/mo)** — 200-600+ iterations/session, professional long-running loops **Recommendation:** Start with Pro to learn the workflow. Upgrade to Max 5x once you're running Ralph daily—it's 5x the capacity for 5x the price, which is fair. Go Max 20x if you're doing client work or need extended autonomous sessions. > **The ROI math:** At $200/month for Max 20x, if Ralph saves you just 5 hours (at $40/hr billing), it's already paid for itself. Most serious users report 20-40+ hours saved monthly. ### Cost Management Tips - **Always set `--max-iterations`** — Your real safety net - **Start small** — 10-20 iterations until you understand the costs - **Focused prompts = fewer iterations** — Vague prompts burn tokens - **Track your usage** — Check at https://claude.ai/settings ## Conclusion You've now covered the advanced techniques that will help you get more out of Ralph. These techniques enable you to tackle complex, production-grade projects with confidence. 
You know how to prevent problems before they occur and fix them quickly when they do. For additional resources, explore [The Ralph Playbook](https://claytonfarr.github.io/ralph-playbook/) for comprehensive methodology documentation, [Geoffrey Huntley's original guide](https://github.com/ghuntley/how-to-ralph-wiggum) for the philosophy behind the technique, and [JeredBlu's practical guide](https://github.com/JeredBlu/guides/blob/main/Ralph_Wiggum_Guide.md) for copy-paste-ready configurations. --- --- # Ralph Wiggum: The AI Loop That's Revolutionizing Autonomous Coding URL: /blog/ralph-wiggum-autonomous-ai-coding Published: 2026-01-16 Author: Jo Vinkenroye Tags: Claude Code, AI, Automation, Developer Tools, Productivity --- Ship production code while you sleep. Learn how Ralph Wiggum enables autonomous AI coding loops that self-correct and iterate until done. Picture this: you push a complex feature request to Claude Code at 11 PM, close your laptop, and go to sleep. Eight hours later, you wake up to a fully implemented, tested, and committed solution. No babysitting. No prompt engineering gymnastics. Just results. That's not a fantasy—it's what developers using Ralph Wiggum are doing right now. And if you're still manually shepherding every AI interaction, you're leaving massive productivity gains on the table. Here's the uncomfortable truth: while you're carefully crafting prompts and reviewing every AI suggestion, other developers are shipping entire features autonomously. YC hackathon teams have built 6+ repositories during long-running sessions—for $297 in API costs. One developer completed a $50k contract for less than $300. Geoffrey Huntley ran a 3-month autonomous loop that built an entire programming language. The gap between developers who've figured this out and those who haven't is widening fast. This guide is your shortcut to the right side of that gap. > **New to Claude Code?** This guide assumes familiarity with Claude Code basics. 
If you're just getting started, read [Claude Code Mastery Part 1: Getting Started](/blog/claude-code-mastery-01-getting-started) first. ## From Clever Hack to Industry Standard What started as a bash script experiment has become an official Anthropic plugin. [Boris Cherny](https://github.com/anthropics/claude-code/tree/main/plugins/ralph-wiggum), Anthropic's Head of Claude Code, formalized it in summer 2025. By [early 2026](https://venturebeat.com/technology/how-ralph-wiggum-went-from-the-simpsons-to-the-biggest-name-in-ai-right-now), VentureBeat was calling it "the biggest name in AI right now." This represents a fundamental shift in how we work with AI: from "chatting" to managing autonomous sessions. ## The Professional Workflow Simple tasks? Just run `/ralph-loop` and let it rip. But for serious projects—the kind that used to take weeks—professionals use a structured three-phase approach: **Requirements → Planning → Building**. Define specs, generate a plan, then let Ralph build autonomously while you sleep. The full methodology is covered in [Part 2: The Three-Phase Methodology](/blog/ralph-wiggum-part-2-methodology). ## Who's Using This (And What They're Building) The developer community has gone all-in on Ralph for production work: **Startups** are building entire MVPs in long-running autonomous sessions. One YC team shipped their demo day prototype in a single Ralph session. **Solo developers** are completing contract work 10x faster. That $50k contract for $300 in API costs? Real story. **Teams** run parallel Ralph sessions on different features. Monday standup becomes "here's what Ralph shipped over the weekend." **Open source maintainers** automate the tedious stuff—migrating from React 16 to 19, converting CommonJS to ESM, adding TypeScript types to legacy codebases. ## Master Ralph Wiggum: The Complete Series I've put together everything you need to go from "what's Ralph?" to running extended autonomous builds with confidence. 
Four parts, zero fluff, all actionable. ### The Series 1. **[Part 1: Introduction and Fundamentals](/blog/ralph-wiggum-part-1-introduction)** — Installation, core concepts, when to use it (and when not to) 2. **[Part 2: The Three-Phase Methodology](/blog/ralph-wiggum-part-2-methodology)** — The professional workflow for multi-day autonomous projects 3. **[Part 3: Ralph TUI Monitoring](/blog/ralph-wiggum-part-3-ralph-tui-monitoring)** — Real-time visibility, keyboard controls, session management 4. **[Part 4: Advanced Patterns & Troubleshooting](/blog/ralph-wiggum-part-4-advanced-troubleshooting)** — Expert patterns, debugging stuck loops, enterprise techniques ## The Uncomfortable Question Here's what you need to ask yourself: how much longer can you afford to manually babysit every AI interaction? The developers using Ralph aren't working harder—they're working smarter. They define requirements clearly, set up the loop, and let it run. They wake up to working code instead of spending their mornings writing it. **For solo developers:** This is how you compete with teams. Take on bigger contracts. Ship side projects that would otherwise never get finished. **For teams:** This is how you ship faster without burning out. Let Ralph handle the mechanical work while humans focus on architecture and product decisions. **For startups:** This is how you move at the speed investors expect. Validate ideas in days instead of weeks. Ship MVPs while your competitors are still in planning meetings. ## Get Started Now The learning curve is smaller than you think. Install the plugin, run your first loop on something small, and experience the shift firsthand. [code block] Then try something simple: [code block] Watch it work. Review the results. Then scale up. The gap is widening. The question isn't whether autonomous AI coding will become standard—it's whether you'll be ahead of the curve or playing catch-up. 
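To demystify what the plugin automates, the heart of the technique can be sketched as a plain shell function: rerun an agent command until it succeeds or a hard iteration cap is hit. This is a minimal illustration, not the official plugin; the `claude -p` invocation and the `PROMPT.md` filename in the usage comment are assumptions you'd adapt to your own setup.

```shell
# Minimal sketch of the core Ralph pattern: retry a command until it exits
# successfully or an iteration cap is reached. The real plugin does far more
# (progress tracking, context management); this only shows the loop shape.
ralph_loop() {
  max_iterations="$1"; shift
  i=0
  while [ "$i" -lt "$max_iterations" ]; do
    if "$@"; then
      return 0          # the command reported success, so stop looping
    fi
    i=$((i + 1))
  done
  return 1              # cap reached without success: your safety net
}

# Illustrative usage (adapt to your setup):
#   ralph_loop 20 claude -p "$(cat PROMPT.md)"
```

The iteration cap mirrors the `--max-iterations` advice from the cost-management tips: without it, a loop that never converges just keeps burning tokens.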
--- **Ready to dive deep?** Start with [Part 1: Introduction and Fundamentals](/blog/ralph-wiggum-part-1-introduction). --- --- # Claude Code Mastery Part 1: Getting Started URL: /blog/claude-code-mastery-01-getting-started Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Developer Tools, CLI, Productivity Series: Claude Code Mastery (Part 1 of 10) --- Get Claude Code installed and running your first commands in under 15 minutes. Learn installation, authentication, essential commands, and the mindset shift that makes AI-assisted coding actually work. Picture this: you're staring at a massive legacy codebase, hundreds of files deep, and someone asks you to "just add a small feature." You could spend hours tracing imports, understanding patterns, and figuring out where everything connects. Or you could type `claude` in your terminal and ask it to explain the architecture in plain English. That's the promise of Claude Code—an AI pair programmer that doesn't just autocomplete your code, but actually *understands* your entire project and can execute tasks autonomously. Let's get you set up. ## The Productivity Gap Is Real Here's something nobody wants to say out loud: if you're not using tools like Claude Code, you're already falling behind. The developers who've figured this out are operating at 10-100x the productivity of those who haven't. That's not hype. That's what happens when you can ship entire features in an hour instead of a day, when you can refactor architecture in minutes instead of weeks, when you can debug complex issues by having an AI read thousands of files in seconds. While some developers are manually tracing through codebases, others are already done and moving to the next task. While some are reading documentation, others have shipped. While some are debugging line by line, others have solved it and moved on. This guide is everything I learned in a year of daily Claude Code use. Every hack that saved me hours. 
Every workflow that multiplied my output. Every mindset shift that made the difference between frustration and flow. If you're a solo developer, this will change your career. If you're leading a team, this will change your entire organization. The gap is widening fast. Let's make sure you're on the right side of it. ## What Exactly is Claude Code? Claude Code is Anthropic's agentic coding assistant that lives in your terminal. Unlike traditional code completion tools that suggest the next few characters, Claude Code can: - **Navigate your entire codebase** - It reads, understands, and connects the dots across hundreds of files - **Execute multi-step tasks** - "Refactor this module to use TypeScript" isn't just a suggestion; Claude actually does it - **Handle git workflows** - Commits, branches, PR descriptions—all through natural language - **Run anywhere** - Terminal, VS Code, JetBrains, or even directly on GitHub via @claude mentions Think of it less like an autocomplete and more like a very eager junior developer who types 1000x faster than you, can read your entire codebase in seconds, and has infinite patience for explaining things. Here's the catch: LLMs will happily barrel forward without any guardrails if you let them. The developers who get the most out of Claude Code aren't just good at prompting—they're good at setting up structure that channels all that speed into something useful instead of chaotic. ## Before You Install You'll need one of these two things: **Option A: Claude Pro or Max Subscription** If you're already paying for Claude Pro ($20/month) or Max ($100/month), you're covered. Claude Code usage is included, and you authenticate by simply logging into your Claude account. **Option B: Anthropic API Key** Prefer pay-as-you-go? Create an account at [console.anthropic.com](https://console.anthropic.com), add billing, and generate an API key. You pay only for what you use. 
> **Pro tip:** If you have *both* an API key set as an environment variable AND a Claude subscription, the API key takes priority—meaning you'll be charged per-token instead of using your subscription. Run `/status` inside Claude Code to verify which authentication method is active. **System Requirements:** - Node.js 18 or higher (check with `node --version`) - macOS, Linux, or Windows ## Installation The npm installation method is deprecated. Here are the official ways to install Claude Code: **macOS or Linux (Recommended):** [code block] **Homebrew (macOS/Linux):** [code block] **Windows (PowerShell):** [code block] **Windows (WinGet):** [code block] After installation, Claude Code auto-updates in the background. Homebrew and WinGet users need to update manually with `brew upgrade claude-code` or `winget upgrade Anthropic.ClaudeCode`. ## First Launch Navigate to any project directory and run: [code block] What happens next depends on your setup: 1. **Browser opens** for authentication 2. **Log in** with your Claude account or Anthropic console credentials 3. **Authorize** Claude Code to access your account 4. A dedicated **"Claude Code" workspace** gets created automatically for usage tracking That's it. You're in. ## The First Thing You Should Do Run this command immediately: [code block] This installs terminal shortcuts that let you use `Shift + Enter` for multi-line input. Without it, you're stuck typing everything on a single line—which gets painful fast when you're writing detailed prompts. > **Why this matters:** Imagine trying to explain a complex bug in one line versus being able to structure your thoughts across multiple paragraphs. Multi-line input transforms how effectively you can communicate with Claude. ## Two Ways to Work ### Interactive Mode (Your Daily Driver) [code block] Opens a REPL session where you have a back-and-forth conversation. Claude maintains context throughout, remembers what you discussed, and can handle complex multi-step tasks. 
**Use this for:** - Building new features - Debugging tricky issues - Code reviews and refactoring - Exploring unfamiliar codebases #### Plan Mode (Think Before Acting) Press `Shift+Tab` twice before sending your message to enter Plan Mode. In this read-only state, Claude can explore your codebase, research approaches, and propose solutions—but can't modify any files. **Use this for:** - Architectural decisions before implementation - Understanding unfamiliar code - Evaluating different approaches to a problem - Complex refactoring where you want to see the plan first Plan Mode is covered in depth in [Part 2: The Mental Model](/blog/claude-code-mastery-02-mental-model#plan-mode-think-before-acting). ### One-Shot Mode (Quick Questions) [code block] Runs a single command and exits. No interactive session, just question → answer → done. **Use this for:** - Quick lookups: `claude -p "what does the auth middleware do?"` - Automated scripts: `claude -p "list all TODO comments" >> todos.txt` - CI/CD pipelines: `claude -p "generate changelog from recent commits"` ## Essential Commands Once you're inside an interactive session, these slash commands control Claude's behavior: **`/help`** — Shows all available commands including custom ones. Use when you forget a command or want to discover new features. **`/clear`** — Wipes conversation history. **Use often!** Start fresh between unrelated tasks to save tokens. **`/compact`** — Compresses conversation context. Use when context is getting long but you want to keep some history. **`/model`** — Switch between Opus, Sonnet, or Haiku. Opus for complex reasoning, Haiku for quick tasks, Sonnet for balance. **`/config`** — Opens settings configuration. Use for adjusting permissions, defaults, and preferences. **`/status`** — Shows current status and auth method. Use to verify which account/API key is being used. **`/cost`** — Displays token usage. Keep an eye on consumption. **`/context`** — View what's in Claude's current context. 
Helps you understand what Claude "knows" right now. **`/vim`** — Enable vim-style editing. If you're a vim user, you'll feel right at home. **`/init`** — Initialize or update CLAUDE.md. Run after major features or refactors to keep project context current. ### Session Management Commands **`/terminal-setup`** — Install Shift+Enter shortcut for multi-line input. **`/allowed-tools`** — Configure which tools Claude can use (file access, bash, etc.). **`/hooks`** — Set up automation hooks for certain events. **`/mcp`** — Manage Model Context Protocol servers. **`/agents`** — Create and manage subagents for parallel tasks. **`/install-github-app`** — Set up GitHub Actions integration for @claude mentions. ## Session Management from the Terminal Sometimes you need to continue where you left off: [code block] The `-r` flag is particularly useful when you realize "wait, I was working on that auth bug two days ago and Claude had figured something out." You can jump back into that exact conversation. ## Your First Real Task Here's a great way to test your setup. Navigate to any project and run: [code block] Then type: [code block] Claude will scan your codebase, identify patterns, and give you a structured overview. This is genuinely one of the best ways to onboard to any unfamiliar project—whether it's a new job, an open-source contribution, or your own code from six months ago. ## The Permission System Claude doesn't just silently modify your files. Before making changes, you'll see prompts like: [code block] Your options: - **`y`** - Allow this specific action - **`n`** - Block it - **`a`** - Always allow this type of action (no more prompts for similar operations) **My recommendation:** Start with `y` for everything until you understand Claude's behavior patterns. Once you trust it for certain operations (like editing test files), switch to `a` to speed up your workflow. 
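Once you've settled on rules you trust, they can live in a checked-in settings file instead of being re-approved interactively. Here's a minimal sketch of what a `.claude/settings.json` permissions block might look like; the specific rule patterns are illustrative examples, not recommendations:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Edit(src/**)"
    ],
    "deny": [
      "Read(./.env)",
      "Bash(curl:*)"
    ]
  }
}
```

Deny rules win over allow rules, which makes this a reasonable place to fence off secrets files regardless of how permissive the rest of your setup is.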
> **Advanced permissions:** For fine-grained control with allow/deny lists, automated hooks, and team-wide permission policies, see [Settings Configuration in Part 3](/blog/claude-code-mastery-03-project-configuration#settingsjson-permissions-and-automation). ## IDE Integration Claude Code works natively with several IDEs: **VS Code** — Search "Claude Code" in Extensions (Cmd/Ctrl+Shift+X). Full integration with inline diffs and multi-tab conversations. **Cursor** — Manual VSIX installation may be required. Based on VS Code, works great once installed. **Windsurf** — Similar to Cursor setup. VS Code fork that supports VSIX extensions. **JetBrains** — Plugin available. Works with IntelliJ, WebStorm, PyCharm, and other JetBrains IDEs. > **Important:** The VS Code extension isn't a replacement for the CLI—it's a launcher that provides a nicer interface. The actual Claude Code still runs as the same engine under the hood. You can have multiple instances running in parallel across different parts of your codebase. ## The Mindset Shift: Structure Enables Speed Here's what separates developers who love Claude Code from those who find it frustrating: **the ones who succeed treat it like collaborating with a very capable but very eager junior developer**. That junior dev types incredibly fast, never gets tired, and can absorb your entire codebase in seconds. But they'll also happily charge forward without guardrails if you let them. The practices that make AI-assisted coding work aren't new—they're the same things experienced developers have done for years. They just become *critical* now. ### Git Is Your Safety Net Commit frequently, in small chunks. If Claude goes off the rails, you can just `git checkout .` or revert to a known good state. Think of commits like save points in a video game. This sounds basic, but it changes everything. When you know you can always roll back, you're free to let Claude experiment. 
When you're not sure, you hesitate, micromanage, and lose most of the speed advantage. ![I'll just make one more change before committing - This Is Fine](/assets/blog/this-is-fine-meme.jpg) If you're rusty on Git, the [official GitHub docs](https://docs.github.com/en/get-started) are genuinely excellent for getting started. ### Plan Before You Prompt Write out what you're building *before* you start prompting. A simple markdown doc with your goals, architecture decisions, and constraints works wonders. Keep it in your repo so Claude can reference it. And seriously—**use [Plan Mode](/blog/claude-code-mastery-02-mental-model#plan-mode-think-before-acting)**. It's literally in the name. When you ask Claude to plan something, it asks you clarifying questions, then proposes a plan and gives you the option to accept it. If you're not satisfied for *any* reason, say no and tell Claude what you'd like changed. Iterate as much as you need. This forces *you* to think, and gives the LLM crucial context. `Shift+Tab` twice, then "design a login form" is infinitely better than "build me a login form." ### Break Big Tasks Into Issues Instead of "hey Claude, build me a user auth system," break it into discrete pieces: "Create User model," "Add session handling," "Build login form." One PR per issue. Create GitHub issue templates (feature request, bug report—these two cover 90% of cases) and check them into source control. GitHub automatically uses them when anyone creates an issue through the UI. Once you have a solid plan and understand it, ask Claude to create a GitHub issue using your template. Or if you've got a big plan, ask Claude to break it into smaller, manageable chunks, then turn the pieces you're comfortable with into issues. ### Tests First (Yes, Really) Ask Claude to write tests before implementation. Sounds counterintuitive, but it forces the LLM to think clearly about what "done" looks like—and gives you a safety net. 
If tests don't pass, something's off with the implementation. This is called TDD (test-driven development), or "going from red to green" based on the typical CLI colors for failing vs. passing tests. It's not just good practice—it's how you keep Claude honest. ### Linters Catch "Creative" Choices ESLint, Prettier, RuboCop, markdownlint, yamllint—whatever fits your stack. Set them up with Git pre-commit hooks. This catches a lot of the stylistic choices LLMs make that don't match your codebase. Example: Claude *loves* to skip the empty line between a header and a bulleted list in markdown files. No idea why. But standard practice is to include it. So you install markdownlint, set your rule preferences in a config file, run it on any markdown files Claude touches, and problem solved. ### CLAUDE.md: Your Project's Instruction Manual Create a `CLAUDE.md` file in your project root with your coding standards, preferences, and project context. Claude reads it automatically at the start of every session. Don't make it too big though. LLMs have surprisingly poor short-term memory compared to the random stuff they can seemingly pull out of thin air about events from 300 years ago (thanks, training data). Keep it focused on your preferences and key project details. Run `/init` after any large feature implementation or refactor to keep CLAUDE.md current. Claude will tell you if there's anything to add or update, or just say "looks good, no changes necessary." ## Common Beginner Mistakes (And How to Avoid Them) **1. Skipping `/terminal-setup`** You'll wonder why your detailed prompts feel cramped. Just run it. **2. Never using `/clear`** Every message you send includes all previous context. After finishing a task, clear the slate. Your token bill will thank you. **3. Being too vague** "Fix the bug" vs "Fix the null pointer exception in `auth.ts` at line 42 where `user.id` is accessed before the null check"—one of these gets you better results. **4. 
Asking Claude to change code it hasn't read** Always let Claude analyze first. "Read the auth module and explain how sessions work" before "rewrite the session handling." **5. Dumping massive tasks in one prompt** "Rewrite the entire application in TypeScript" is overwhelming. Break it down: "Convert the user module to TypeScript, then we'll do auth next." **6. Not checking `/status`** If you're wondering why you're getting billed when you have a subscription, check which authentication method is active. **7. Skipping [Plan Mode](/blog/claude-code-mastery-02-mental-model#plan-mode-think-before-acting)** "Just build it" feels faster. It's not. `Shift+Tab` twice and take five minutes to plan—it saves hours of undoing work that went in the wrong direction. **8. No safety net** If you're not committing frequently, you're one bad generation away from a frustrating afternoon. Commit before big changes. Commit after successful changes. Commits are free. ## What's Next You now have Claude Code installed and understand not just the commands, but the *mindset* that makes AI-assisted coding work. Structure and guardrails aren't constraints—they're what make the speed actually useful instead of chaotic. In [Part 2: The Mental Model](/blog/claude-code-mastery-02-mental-model), we'll dive into how Claude Code actually "thinks"—understanding context windows, tool usage, and how to structure your prompts to get consistently better results. This knowledge transforms Claude from a sometimes-helpful assistant into a reliable engineering partner. --- --- # Claude Code Mastery Part 2: The Mental Model URL: /blog/claude-code-mastery-02-mental-model Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Developer Tools, Productivity, Best Practices Series: Claude Code Mastery (Part 2 of 10) --- Understanding how Claude Code thinks will transform your results. Learn the agentic loop architecture, context management, and effective prompting patterns for AI pair programming. 
Here's a truth that separates developers who love Claude Code from those who find it frustrating: **understanding how it thinks changes everything**. Most people approach Claude Code like a search engine—type a question, get an answer. But Claude Code isn't a search engine. It's an autonomous agent running in a loop, making decisions, using tools, and adjusting based on results. Once you understand this architecture, you'll know exactly why certain prompts work brilliantly while others fall flat. ## The Agentic Loop: How Claude Code Actually Works Under the hood, Claude Code runs a deceptively simple pattern: [code block] Here's what that looks like in practice: 1. **Think** — Claude analyzes your request and decides what to do 2. **Act** — It uses a tool (read a file, run a command, edit code) 3. **Observe** — It sees the result of that action 4. **Correct** — It adjusts its approach based on what happened 5. **Repeat** — The loop continues until Claude decides the task is complete This isn't a linear conversation. It's a recursive loop that keeps running until Claude explicitly decides it's done. The default behavior is *continue until resolved*, not *respond once and stop*. **Why this matters:** When you ask Claude to "add authentication to this app," it doesn't just generate code and hand it to you. It reads your existing code, understands your patterns, writes the implementation, runs tests to verify it works, fixes any issues it finds, and only then reports back. That's the loop in action. ## The Context Window: Claude's Working Memory Claude has a "context window"—think of it as working memory. Everything Claude needs to know about your conversation, your codebase, and the current task has to fit in this window. **Standard context:** 200,000 tokens (roughly 500 pages of text) **Extended context:** Claude Sonnet 4 now supports up to 1 million tokens on the API—enough for entire codebases with 75,000+ lines of code. 
This requires tier 4 API access and costs 2x for input tokens beyond 200K. Here's the critical insight: **every message you send includes the entire conversation history**. That's how Claude "remembers" what you discussed earlier. But it also means token costs compound as a conversation grows: every new message resends everything that came before it. ### What Fills Up Your Context Context comes from multiple sources, and they all add up: **Your prompts** — Every message you've sent in this session **Claude's responses** — Everything Claude has said back **Tool results** — File contents Claude has read, command outputs, search results **System context** — CLAUDE.md contents, git status, project structure **Session history** — The full back-and-forth since your last `/clear` When you ask Claude to "read this file," the entire file contents get added to context. Ask it to read ten files, and you've potentially consumed thousands of tokens before Claude even starts working on your actual request. ### A Note on Subscription Plans How much you need to worry about context depends on your plan: **Max 5x / Max 20x plans** — You have ample token limits. Context management is still good practice for output quality, but you're not watching every token. **Pro plan / Pay-as-you-go API** — Every tool call, every file read, every verbose response costs you. Aggressive context management directly impacts your bill. The strategies below matter most for cost-conscious users, but even on unlimited plans, cleaner context produces better outputs. ## Managing Context: The Commands That Matter ### `/clear` — The Recommended Default [code block] Wipes everything. Fresh start. Zero context. Here's a perspective shift: **`/clear` should be your default, not your last resort**. Experienced Claude Code users clear aggressively—not because something went wrong, but as standard practice between tasks. 
**When to use it:** - Starting a new feature (always) - Between unrelated tasks (finished auth, now working on UI) - When less than 50% of your current context is relevant - When Claude seems confused or stuck in loops - After completing a feature - When you notice repetitive or off-track responses **The `/clear` + `/catchup` pattern:** Some developers create a custom `/catchup` command that makes Claude read all changed files in their git branch. After `/clear`, run `/catchup` to re-establish relevant context without the cruft. (See the [full implementation in Part 4](/blog/claude-code-mastery-04-custom-commands#10-catchup--context-restoration)) [code block] This gives you fresh context with exactly what's relevant. ### `/compact` — Use With Caution > **January 2026 Update:** Since v2.0.64, `/compact` is now **instant**—no more waiting. Also try `/context` to check if MCP servers are eating space before compacting. [code block] Instead of wiping everything, `/compact` creates an intelligent summary of your conversation and starts fresh with that summary as context. **The honest truth:** Many experienced users avoid `/compact` when possible. The automatic summarization is opaque—you don't know exactly what gets preserved or lost. ![/compact - We don't do that here](/assets/blog/we-dont-do-that-here-meme.jpg) **When `/compact` makes sense:** - You're mid-task and need to preserve specific decisions - You have complex context that would be painful to rebuild - You want to continue but context is at 70%+ **When to prefer `/clear` instead:** - Starting something new - Less than half your context is relevant - You can easily re-establish what matters **If you do use `/compact`, guide it:** [code block] This tells Claude what matters most in the summary. ### The "Document & Clear" Strategy For complex, multi-session tasks, there's a better pattern than trusting compaction: 1. **Document:** Have Claude dump its plan and progress into a markdown file 2. 
**Clear:** `/clear` the session completely 3. **Continue:** Start fresh by telling Claude to read the markdown and continue [code block] This creates **durable external memory** that survives sessions perfectly—no lossy summarization. > **January 2026 Update:** This pattern is now built into Plan Mode. When you accept a plan, Claude automatically clears context and loads the plan into a fresh window. This significantly improves plan adherence. You can opt out if you prefer to keep context. ### Auto-Compact: The Safety Net You Shouldn't Rely On > **January 2026 Update:** Auto-compact now triggers at **75% capacity** (not 92-95%), leaving ~50k tokens free for reasoning. This is a significant improvement. Claude Code automatically compacts when your context hits capacity. This prevents running out of space mid-task. **But don't over-rely on it.** Auto-compaction is still emergency behavior, not a strategy. By the time it triggers, you may have already experienced degraded output quality. Better approach: **clear proactively at natural breakpoints** rather than letting context bloat until the system intervenes. ### Extended Thinking: When Claude Needs to Think Harder Claude Code supports thinking mode triggers that allocate more reasoning budget: **`"think"`** — ~4,000 tokens thinking budget **`"think hard"`** — ~10,000 tokens thinking budget **`"ultrathink"`** — ~31,999 tokens thinking budget (maximum) Just include these words in your prompt: [code block] **When to use them:** - Architecture decisions → `ultrathink` + plan mode - Stuck in loops → `ultrathink` to break through - Complex debugging → `think hard` - Routine tasks → no keyword needed > **Note:** These keywords only work in Claude Code's terminal interface, not in the web chat or API. For a deeper dive into all the trigger words and latest updates, see [Part 9: Power User Secrets](/blog/claude-code-mastery-09-power-user-secrets#extended-thinking-the-real-story). 
## Plan Mode: Think Before Acting Here's where most people go wrong: they ask Claude to *do* things without first asking it to *plan* things. Plan mode is a read-only state where Claude researches, analyzes, and proposes approaches without touching any files. It's invaluable for complex tasks. ### How to Enter Plan Mode Press `Shift+Tab` to cycle through modes: 1. **First press:** Auto-accept mode (Claude doesn't ask permission for each action) 2. **Second press:** Plan mode (Claude can only read, not write) 3. **Third press:** Back to normal edit mode Look at the prompt at the bottom of your terminal to see which mode you're in. ### What Claude Can Do in Plan Mode Plan mode restricts Claude to research tools only: - **Read** — View files and content - **LS/Glob** — List directories and search file patterns - **Grep** — Search content across files - **WebSearch/WebFetch** — Research online - **Task** — Spawn research sub-agents - **TodoRead/TodoWrite** — Manage task lists **What it cannot do:** Edit files, run commands, make any changes. ### When to Use Plan Mode **Exploring unfamiliar code:** [code block] Claude digs through your code and explains it without risk of breaking anything. **Planning a refactor:** [code block] Claude analyzes dependencies and proposes a migration path. **Understanding architecture:** [code block] Claude considers your existing code and proposes options. The key insight: **planning is cheap, undoing is expensive**. Five minutes in plan mode often saves hours of reverting bad implementations. ## The Junior Developer Analogy (Refined) In [Part 1](/blog/claude-code-mastery-01-getting-started), we introduced the "eager junior dev" mental model. 
Let's refine it now that you understand the architecture: Claude Code is like a highly skilled junior developer who: **Needs clear instructions** — Vague requests produce vague results because the agentic loop has no clear stopping condition **Benefits from context** — More relevant information means better tool selection and fewer wasted iterations **Works best with specific tasks** — "Add error handling to the login function" gives a clear completion state; "improve the code" doesn't **Learns from your project's patterns** — Claude observes conventions in files it reads and applies them to files it writes **Asks before making big changes** — The permission system exists because the loop would otherwise execute indefinitely **Can get stuck in loops** — If a task isn't well-defined, Claude might repeat similar actions without progress ## Effective Prompting Patterns Now that you understand the loop, here's how to write prompts that work with it: ### Be Specific About Completion The loop needs to know when to stop. [code block] ### Provide Context Upfront Every tool call consumes tokens and time. Help Claude find what it needs faster. [code block] > **Note:** On Max 5x/20x plans with generous token limits, you can afford to let Claude explore more freely. But even then, specific references produce faster, more consistent results. ### Break Down Large Tasks One long loop is worse than multiple short loops. [code block] ### Reference Files by Path Don't make Claude search for patterns it could find directly. [code block] ## When Claude Gets Stuck Signs the agentic loop is spinning without progress: - Repetitive responses (same suggestions over and over) - Circular logic (trying the same fix repeatedly) - Asking questions it already asked - Producing similar incorrect code in cycles ![When Claude keeps suggesting the same fix for the 5th time in a row](/assets/blog/confused-math-lady-meme.jpg) **Solutions, in order:** 1. **`/clear` and start fresh** — Most effective. 
A clean context often unsticks immediately. 2. **Use `ultrathink`** — "ultrathink about this problem" allocates maximum reasoning budget and often breaks through where normal prompts fail. 3. **Provide more specific context** — The loop might be searching broadly because it doesn't know where to look. 4. **Break the task smaller** — The completion condition might be too vague. 5. **Try a different framing** — "Instead of fixing this function, let's rewrite it from scratch." 6. **Switch to Opus** — If you're on Sonnet (Pro plan default), switching to Opus via `/model` can help with genuinely complex reasoning. Max plan users already have Opus as default. ## The Feedback Loop Effective Claude Code usage is iterative. Don't expect perfection on the first try. [code block] This isn't Claude failing—it's the agentic loop working as designed. Each iteration adds information that guides the next cycle. ## Trust Calibration Over Time Your relationship with Claude Code should evolve: **Week 1:** Review every change carefully. Understand what Claude does and why. **Week 2:** Allow auto-accept mode for safe operations (reading, test files, documentation). **Month 1:** Trust routine tasks. Verify complex refactors and anything touching critical paths. **Ongoing:** Always verify security-critical code, database operations, and anything with external effects. The goal isn't blind trust—it's informed trust. You're training your intuition for when Claude needs supervision. ## Anti-Patterns to Avoid **Dumping entire files in your prompt** — Let Claude read them itself. "Read src/auth.ts and explain the session handling" is better than pasting 500 lines. **Ignoring Claude's questions** — When Claude asks for clarification, it's because the loop doesn't have enough information to proceed confidently. Answer fully. **Rushing permission prompts** — Read what you're approving. "Always allow" is convenient but removes a safety check. **Keeping stale context** — `/clear` costs nothing. 
Stale context costs tokens and accuracy. **Using Claude for trivial tasks** — Sometimes typing `git status` is faster than explaining what you want to see. ## What's Next You now understand how Claude Code thinks. In [Part 3: Project Configuration](/blog/claude-code-mastery-03-project-configuration), we'll dive into CLAUDE.md—the file that shapes Claude's behavior for your specific project. A well-crafted CLAUDE.md is the difference between Claude that understands your codebase and Claude that fights against your patterns. --- --- # Claude Code Mastery Part 3: Project Configuration URL: /blog/claude-code-mastery-03-project-configuration Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Configuration, Developer Tools, Best Practices Series: Claude Code Mastery (Part 3 of 10) --- The definitive guide to CLAUDE.md, settings.json, and project memory. Learn how to teach Claude about your specific project with templates, imports, and best practices. Here's something that trips up almost everyone new to Claude Code: they start using it on a project, get mediocre results, and assume it's just not that good. But then they see someone else get amazing results on a similar codebase. What's the difference? **Configuration.** A well-configured Claude Code session doesn't just understand your code—it understands your *patterns*, your *conventions*, your team's preferences. It knows that you use Bun instead of npm, that API routes live in `app/api/`, and that your team has strong opinions about error handling. This chapter shows you how to set that up. ## CLAUDE.md: Your Project's Memory CLAUDE.md is automatically loaded when Claude Code starts. Think of it as the instruction manual you hand to a new developer on their first day—except Claude reads it every single session. ### Where to Put It Claude Code uses a hierarchical system for loading CLAUDE.md files: **Project root (most common):** [code block] Check it into git. Share it with your team. 
This is what you want 90% of the time. **Alternative location:** [code block] Same effect, but keeps your project root cleaner. **Global (applies to all projects):** [code block] Personal preferences that follow you everywhere. Good for things like "I prefer tabs over spaces" or "always use TypeScript strict mode." **Local variant (gitignored):** [code block] Personal tweaks you don't want to commit. API keys, experimental settings, or that controversial coding style your team doesn't share. ### Monorepo Support Here's where it gets clever. In a monorepo, you might have: [code block] When you run `claude` from `monorepo/apps/web`, both the root CLAUDE.md and the web-specific one get loaded. Child directory CLAUDE.md files are pulled in on-demand when you work with files in those directories. ## What to Put in CLAUDE.md A good CLAUDE.md answers three questions: **WHAT** — What is this project? What's the tech stack? What's the directory structure? **WHY** — What's the purpose? What problem does it solve? What are the key business concepts? **HOW** — How do you work on it? What commands run tests? What's the deployment process? How do you verify changes? ### A Practical Template [code block] ### Framework-Specific Examples **Next.js App Router:** [code block] **Python FastAPI:** [code block] **Go:** [code block] ## The Import System CLAUDE.md files can import other files using the `@path/to/file` syntax. This keeps your main file lean while still providing comprehensive context. [code block] **How imports work:** - Relative paths resolve from the CLAUDE.md file's location - Imports can be recursive (up to 5 levels deep) - Code blocks are excluded (imports inside code fences aren't evaluated) - Later imports take precedence over earlier ones **Organizing with rules directory:** For larger projects, you can split instructions into multiple files: [code block] All `.md` files in `.claude/rules/` are automatically loaded as project memory.
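To make the import syntax concrete, here's a minimal sketch of an importing CLAUDE.md. The project name, commands, and file paths are hypothetical, so substitute your own:

```markdown
# Acme Dashboard

Next.js 14 app. TypeScript strict mode. Bun, not npm.

## Commands
- `bun test` — run unit tests
- `bun run build` — production build

## Detailed rules (imported, kept out of this file)
@docs/architecture.md
@.claude/rules/error-handling.md
```

Each `@` line pulls the referenced file into project memory, so the main file stays under a screenful while the heavier documentation lives elsewhere.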
## The # Key Shortcut Here's a productivity tip most people miss: during any Claude Code session, press `#` to add instructions to your CLAUDE.md on the fly. Found yourself repeating the same instruction? Press `#` and add it permanently. Claude will remember next time. ## Settings.json: Permissions and Automation Beyond CLAUDE.md, you can configure Claude's behavior with settings files. ### File Locations [code block] Settings merge in order of precedence: local > project > global. ### Permissions Configuration Control what Claude can do without asking: [code block] **How rules are evaluated:** 1. Deny rules are checked first (block regardless of other rules) 2. Allow rules are checked next (permit if matched) 3. Everything else prompts for approval The deny list is your security boundary. Files matching deny patterns become completely invisible to Claude—it can't even see they exist. ### Hooks: Automated Actions Hooks run commands at specific points in Claude's lifecycle: **PreToolUse** — Before Claude uses a tool (can block the action) **PostToolUse** — After a tool completes (great for formatters) **SessionStart** — When a session begins (setup environment) **Notification** — When Claude sends notifications **Stop** — When Claude finishes responding Example: Auto-format Python files after edits: [code block] Example: Prevent modifications to production configs: [code block] > **Tip:** You can also configure hooks interactively using the `/hooks` command during a session. ## Team Configuration For teams, here's what to commit vs. 
gitignore: **Commit these (shared with team):** - `CLAUDE.md` — Project instructions everyone should follow - `.claude/settings.json` — Shared permissions and hooks - `.claude/rules/` — Organized instruction files - `.claude/commands/` — Team slash commands (covered in Part 4) **Gitignore these (personal):** - `CLAUDE.local.md` — Your personal preferences - `.claude/settings.local.json` — Your local overrides This lets teams enforce standards while individuals can customize their experience. ## Best Practices That Actually Matter ### Keep It Lean Research suggests frontier LLMs can reliably follow around 150-200 instructions. Beyond that, adherence drops. Smaller models handle even fewer. **Don't do this:** [code block] Claude already knows these things. You're wasting precious instruction space on generic advice. **Do this instead:** [code block] Only include what's unique to your project. ![Galaxy Brain: No CLAUDE.md → CLAUDE.md with "be helpful" → CLAUDE.md with tech stack → CLAUDE.md with exact file paths, commands, and project-specific patterns](/assets/blog/galaxy-brain-meme.jpg) ### Iterate Like a Prompt Your CLAUDE.md is part of Claude's prompt. Treat it like one. - Test whether instructions actually change behavior - Add emphasis for critical rules: "IMPORTANT:" or "YOU MUST" - Remove instructions that aren't being followed - Run it through Claude's prompt improver occasionally At Anthropic, they tune CLAUDE.md files the same way they tune prompts—continuous iteration based on what actually works. ### Use /init as a Starting Point Running `/init` generates a CLAUDE.md by analyzing your project. But it's a starting point, not a finished product. The auto-generated version captures obvious patterns (tech stack, directory structure, common commands) but misses your team's tribal knowledge. Review what Claude produces and add what matters most to how you actually work. Run `/init` again after major features or refactors to pick up new patterns.
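Pulling the settings pieces from earlier together, a shared project `.claude/settings.json` combining permission rules with a post-edit formatting hook might look like this. Treat it as a sketch: the field names follow Claude Code's documented settings schema, but the specific commands and paths are illustrative, so adapt them to your stack:

```json
{
  "permissions": {
    "allow": ["Bash(bun test:*)", "Bash(git diff:*)"],
    "deny": ["Read(./.env)", "Read(./secrets/**)"]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "bunx prettier --write ." }]
      }
    ]
  }
}
```

Because deny rules are evaluated first, anything matching `Read(./.env)` stays invisible to Claude even if a broader allow rule exists.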
## Testing Your Configuration After setting up your CLAUDE.md, verify it worked: [code block] Then ask: [code block] If Claude misses something important, your CLAUDE.md needs work. ## What's Next You now know how to configure Claude Code for your specific project. But configuration only goes so far—the real power comes from extending Claude with custom commands. In [Part 4: Custom Commands](/blog/claude-code-mastery-04-custom-commands), we'll build slash commands that encode your workflows into reusable actions. Think of it as teaching Claude your team's shortcuts. --- --- # Claude Code Mastery Part 4: Custom Commands URL: /blog/claude-code-mastery-04-custom-commands Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Automation, Developer Tools, Productivity Series: Claude Code Mastery (Part 4 of 10) --- Build your personal command library. Turn complex workflows into single keystrokes with custom slash commands, arguments, frontmatter, and hooks. Here's something I noticed after a few weeks of using Claude Code: I kept typing the same prompts over and over. "Review this code for security issues." "Generate tests following our project patterns." "Create a commit with a good message." ![Me typing 'review this code for security issues' for the 47th time - Again](/assets/blog/groundhog-day-meme.jpg) Every repeated prompt is wasted keystrokes. Custom slash commands fix that. ## What Are Custom Commands? Custom commands are markdown files that become slash commands. Put a file called `review.md` in the right folder, and suddenly `/review` is a command you can run anytime. [code block] The magic: these commands can accept arguments, specify which tools Claude can use, define hooks that run automatically, and even force a specific model. They're not just saved prompts—they're programmable workflows. 
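As a sketch of what such a file might contain, here's a hypothetical `.claude/commands/review.md`. The checklist items are placeholders for your own standards:

```markdown
Review the current git diff for:

- security issues (injection, leaked secrets, missing auth checks)
- missing error handling
- deviations from this project's conventions

Report findings as a bulleted list grouped by severity (critical / warning / nit).
```

With this file in place, typing `/review` injects those instructions as if you had typed them yourself.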
## Your First Command in 60 Seconds ### Step 1: Create the Directory [code block] ### Step 2: Create a Command File [code block] ### Step 3: Use It [code block] That's it. Claude executes your instructions as if you'd typed them out. ## Arguments: Making Commands Dynamic Static commands are useful, but dynamic commands are powerful. The `$ARGUMENTS` placeholder captures everything after the command name. ### Basic Arguments [code block] Usage: [code block] Whatever you type after `/explain` becomes `$ARGUMENTS`. ### Positional Arguments Need more control? Use `$1`, `$2`, `$3` for specific positions: [code block] Usage: [code block] Here `$1` = "1234", `$2` = "high", `$3` = "user reported login failures on mobile". ## Frontmatter: Command Superpowers The real power comes from YAML frontmatter at the top of your command file. This lets you configure permissions, hints, models, and more. ### The Basics [code block] The `description` shows up when you run `/help`. The `argument-hint` tells users what to pass. ### Pre-Approving Tool Permissions Tired of approving the same git commands every time? Use `allowed-tools`: [code block] Now `/git:commit` runs without permission prompts for those specific git commands. ### Forcing a Specific Model Some commands work better with specific models: [code block] Use Haiku for quick tasks, Sonnet for balanced work, Opus for complex reasoning: [code block] ### Hooks in Commands Commands can define their own hooks that run during execution: [code block] This runs Prettier after every file edit—but only while this command is active. ## Namespacing with Directories Organize related commands in subdirectories. The folder name becomes a namespace: [code block] This keeps your `/help` output organized and commands easy to discover. 
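The directory-to-namespace mapping can be sketched like this; the command names are illustrative:

```text
.claude/commands/
├── git/
│   ├── commit.md     → /git:commit
│   └── rebase.md     → /git:rebase
├── test/
│   └── e2e.md        → /test:e2e
└── review.md         → /review
```

Files at the top level become plain commands; each subdirectory prefixes its commands with the folder name.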
## Global vs Project Commands ### Project Commands (Team) [code block] - Available only in this project - Commit to git—everyone on the team gets them - Great for project-specific workflows ### Personal Commands (You) [code block] - Available in every project - Personal productivity boosters - Not shared with team ### Recommended Split **Keep global (personal):** - `/explain` — Universal code explanation - `/debug` — Systematic debugging process - `/review` — General code review checklist **Keep project-specific (team):** - `/deploy` — Your deployment steps - `/test:e2e` — Your test setup - `/git:commit` — Your team's commit conventions - `/onboard` — Project-specific context for new devs ## 10 Commands Worth Stealing Here are battle-tested commands. Copy them, modify them, make them yours. ![The Ten Commandments of Claude Code](/assets/blog/claude-code-commandments.png) ### 1. /review — Code Review [code block] ### 2. /git:commit — Smart Commits [code block] ### 3. /test:generate — Test Creation [code block] ### 4. /debug — Systematic Debugging [code block] ### 5. /security — Security Audit [code block] ### 6. /refactor — Code Improvement [code block] ### 7. /explain — Code Explanation [code block] ### 8. /optimize — Performance Analysis [code block] ### 9. /ship — Pre-Deploy Checklist [code block] ### 10. /catchup — Context Restoration This command implements the [/clear + /catchup pattern from Part 2](/blog/claude-code-mastery-02-mental-model#the-clear--catchup-pattern) for efficient context management. [code block] ## Beyond Code: Creative Commands Commands aren't limited to code workflows. You can integrate any API or service Claude can call. Here are two I used while writing this very blog series: ### /meme — Generate Memes [code block] I used this to generate the "distracted boyfriend" meme in [Part 5](/blog/claude-code-mastery-05-skills) — developer distracted by "just ship it" while "write tests first" looks on. 
### /imagen — AI Image Generation This is one of my favorites. Google's Imagen API lets you generate images directly from Claude Code. Here's a complete implementation: [code block] The real magic is in the script. Here's a complete implementation you can drop into `~/.claude/scripts/generate-image.js`: [code block] **Getting an API Key:** 1. Go to [Google AI Studio](https://aistudio.google.com/apikey) 2. Create a new API key 3. Either set `GOOGLE_AI_API_KEY` env var or paste directly in script **Usage examples:** [code block] **API Parameters:** - `sampleCount` — Number of images to generate (`1`-`4`, default: `4`) - `aspectRatio` — Output dimensions: `1:1`, `3:4`, `4:3`, `9:16`, `16:9` (default: `1:1`) - `personGeneration` — Controls people in images: - `dont_allow` — Block people entirely - `allow_adult` — Adults only (default) - `allow_all` — Adults and children - `imageSize` — Resolution: `1K` or `2K` (Standard/Ultra models only) **Available models:** - `imagen-4.0-generate-001` — Standard, best quality - `imagen-4.0-ultra-generate-001` — Ultra, highest quality - `imagen-4.0-fast-generate-001` — Fast, lower latency - `imagen-3.0-generate-001` — Previous generation Some cover images and illustrations throughout this series? Generated with `/imagen`. The memes you see here? Created with `/meme`. The point: if there's an API, you can wrap it in a command. Social media posting, translation, image generation, data lookups—anything becomes a slash command. ## Command Design Principles **Single purpose** — One command does one thing well. `/review` reviews, `/test` tests. Don't make Swiss Army knives. **Clear output format** — Tell Claude exactly how to present results. Bullet points? Categories? Severity levels? Be explicit. **Include constraints** — What should Claude NOT do? "No behavior changes" or "Don't modify tests" prevents overreach. **Use appropriate model** — Haiku for quick lookups, Sonnet for most tasks, Opus for complex reasoning. 
Match the model to the job. **Pre-approve safe tools** — If a command always needs git or npm, use `allowed-tools` to skip permission prompts. ## Debugging Commands If a command isn't working: 1. **Check the path** — Is it in `.claude/commands/` or `~/.claude/commands/`? 2. **Check the extension** — Must be `.md` 3. **Run `/help`** — Your command should appear in the list 4. **Check frontmatter** — YAML must be valid (proper indentation, quotes around strings with special chars) ## What's Next Custom commands encode your workflows into reusable actions. But what if you want to share commands beyond your team? What if you want commands that work across different projects without copying files everywhere? That's where Skills come in. In [Part 5: Skills](/blog/claude-code-mastery-05-skills), we'll explore how to package commands for broader distribution and create more sophisticated automation. --- --- # Claude Code Mastery Part 5: Skills URL: /blog/claude-code-mastery-05-skills Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Skills, Developer Tools, Automation Series: Claude Code Mastery (Part 5 of 10) --- Skills are specialized knowledge modules Claude automatically loads when relevant. Learn the difference between Skills and Commands, create your own, and discover powerful skill collections. You've built custom commands. They're great for workflows you trigger explicitly—`/review`, `/commit`, `/deploy`. But what about knowledge Claude should apply automatically without you having to remember to invoke it? That's what Skills do. They're specialized knowledge modules that Claude discovers and loads *when relevant*, not when you explicitly call them. Think of Commands as tools you pick up; Skills are expertise Claude develops. ## Commands vs Skills: What's the Difference? The distinction matters because it changes how you structure your automation. 
**Slash Commands** (from [Part 4](/blog/claude-code-mastery-04-custom-commands)): - You invoke them explicitly: `/review`, `/commit` - Loaded immediately when you call them - Great for workflows you want to trigger intentionally - Simple markdown files with optional frontmatter **Skills**: - Claude invokes them automatically when your request matches - Lazy-loaded—only the description is read at startup, full content loads on use - Great for standards, patterns, and specialized knowledge Claude should always apply - Can bundle supporting files, examples, and executable scripts Here's the mental model: **Commands are verbs** (things you do), **Skills are expertise** (things Claude knows). When you say "review this PR," you might want Claude to: 1. Run your `/review` command (explicit action) 2. Apply your team's code review standards from a Skill (automatic knowledge) Both have their place. ## How Skills Actually Work The magic of Skills is progressive disclosure. Here's the flow: **At startup**: Claude reads only the `name` and `description` from each Skill's SKILL.md file. Minimal overhead. **When you make a request**: Claude matches your request against Skill descriptions. If there's a match, it asks: "I found a Skill that might help. Want me to use it?" **On confirmation**: The full SKILL.md content loads into context. Supporting files load only if Claude needs them during execution. This means you can have dozens of Skills without performance penalty. They're loaded on-demand, not upfront. > **Tip:** Write Skill descriptions with keywords users would naturally say. "Helps with documents" won't trigger. "Review pull requests using OWASP security guidelines and team formatting standards" will. 
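To make the description advice concrete, here's a hypothetical SKILL.md header written for matching. Note how the description names the artifacts a user would actually mention ("pull requests", "diffs") rather than vague verbs:

```markdown
---
name: code-review-standards
description: Review pull requests, diffs, and code changes against OWASP security guidelines and the team's formatting standards. Use when reviewing code, PRs, or merge requests.
---

# Code Review Standards

Check authentication paths first, then input validation, then output encoding.
```

Only the `name` and `description` lines are read at startup; the body below the frontmatter loads when the Skill is actually used.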
## Skill File Structure Skills live in directories with a required `SKILL.md` file: [code block] **Key constraints:** - `SKILL.md` is required (case-sensitive filename) - Keep SKILL.md under 500 lines for optimal performance - Supporting files are discovered via links in SKILL.md - Scripts execute without loading their source into context—only output consumes tokens ## Where Skills Live Skills follow a hierarchy—higher levels override lower: **Enterprise** — Managed settings for all users in organization (highest priority) **Personal** — `~/.claude/skills/` — Your Skills, available across all projects **Project** — `.claude/skills/` — Team Skills, checked into git **Plugin** — `skills/` inside plugin directory — Available to anyone with the plugin installed The split is similar to Commands: personal Skills follow you everywhere, project Skills are shared with your team. ## The SKILL.md Template Here's the anatomy of a well-structured Skill: [code block] ## Frontmatter Options Skills support powerful configuration through YAML frontmatter: **`name`** (required) — Skill identifier. Lowercase, numbers, hyphens only. Max 64 characters. **`description`** (required) — What it does and when to use it. Max 1024 characters. This is how Claude matches requests to Skills—make it descriptive. **`allowed-tools`** — Tools Claude can use without permission prompts when this Skill is active. Comma-separated or YAML list. **`model`** — Override the model when this Skill runs. Useful for complex reasoning tasks. **`context`** — Set to `fork` to run in an isolated sub-agent context with separate conversation history. **`agent`** — Agent type when `context: fork` is set. Options: Explore, Plan, general-purpose, or custom. **`hooks`** — Define Skill-scoped hooks: PreToolUse, PostToolUse, Stop. **`user-invocable`** — Set to `false` to hide from the `/` menu while still allowing auto-discovery. 
**`disable-model-invocation`** — Set to `true` to prevent Claude from invoking the Skill programmatically (it stays visible in the menu).

## Creating Your First Skill

Let's build a practical Skill: a TDD (Test-Driven Development) guide that Claude applies automatically when you're writing tests.

### Step 1: Create the Directory

[code block]

### Step 2: Create SKILL.md

[code block]

### Step 3: Add an Example

Red: start with a failing test.

```typescript
// user.test.ts
it('should reject invalid email format', () => {
  expect(() => new User('notanemail')).toThrow('Invalid email');
});
```

Green: write just enough code to pass.

```typescript
// user.ts
constructor(email: string) {
  if (!email.includes('@')) {
    throw new Error('Invalid email');
  }
  this.email = email;
}
```

Refactor: extract the validation.

```typescript
private validateEmail(email: string): void {
  if (!email.includes('@')) {
    throw new Error('Invalid email');
  }
}
```

Then repeat the cycle with the next failing test.

```typescript
it('should reject email without domain', () => {
  expect(() => new User('test@')).toThrow('Invalid email');
});
```

### Step 4: Use It

Now when you say "let's add email validation with proper testing," Claude will:

1. Recognize this matches the TDD Skill description
2. Ask if you want to use the TDD Skill
3. Follow the Red → Green → Refactor cycle automatically

![Distracted boyfriend meme - developer looking at "just ship it, add tests later" while "write tests first" looks on disapprovingly](/assets/blog/tdd-distracted-boyfriend.jpg)

## The Superpowers Collection

The most popular Skills library is [obra/superpowers](https://github.com/obra/superpowers) with 16.5k+ stars. It's battle-tested and covers common development workflows.

### Installation

[code block]

Or clone directly:

[code block]

> **Note:** Skills use directory namespacing similar to how [commands are organized in Part 4](/blog/claude-code-mastery-04-custom-commands#namespacing-with-directories). The same organizational patterns apply—subdirectories create namespaces for better organization.
### Key Skills Included **`/superpowers:brainstorm`** — Structured ideation before starting complex features. Use this before jumping into code. **`/superpowers:write-plan`** — Create implementation plans for migrations or multi-file refactors. **`/superpowers:execute-plan`** — Run plans in controlled batches with checkpoints. **`/superpowers:tdd`** — Test-driven development with the full Red/Green/Refactor cycle. **`/superpowers:debug`** — Systematic debugging with root cause tracing. ### Why It Works Superpowers isn't just prompts—it's production-proven patterns. The TDD Skill, for example, includes: - Anti-pattern detection (writing tests after code) - Async testing patterns - Proper assertion structure - Refactoring triggers These patterns come from real-world usage across thousands of developers. ## Practical Skill Ideas Here are Skills worth building for your team: **API Documentation** — Automatically apply your API documentation standards when Claude writes or updates endpoints. **Error Handling** — Enforce consistent error handling patterns (your ApiError class, logging format, user-facing messages). **Database Patterns** — Apply your team's conventions for queries, transactions, migrations. **Security Review** — Automatically check for OWASP Top 10 issues when reviewing code. **Component Patterns** — Enforce your React/Vue/Svelte component structure and naming. **Commit Messages** — Apply conventional commit format automatically. ## Skills + Scripts: The Power Combo Here's something underutilized: Skills can bundle executable scripts that run without loading their source into context. [code block] In your SKILL.md: [code block] Claude can execute these scripts, but only the *output* consumes context—not the script source. This is perfect for complex validation logic that's more reliable as tested code than LLM-generated commands. 
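As a sketch of such a bundled script, here's a tiny TypeScript checker a hypothetical commit-message Skill could ship (the path, rule set, and naming are all invented for illustration). Claude would execute it and see only the printed verdict, never the source:

```typescript
// .claude/skills/commit-standards/scripts/check-commit.ts (hypothetical path)
// Validates a commit subject against a conventional-commit pattern.
// A real script would read the subject from process.argv; calls are hardcoded below for clarity.
const CONVENTIONAL = /^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9-]+\))?: .+$/;

function checkCommit(subject: string): string {
  return CONVENTIONAL.test(subject)
    ? "OK"
    : `FAIL: not a conventional commit subject: "${subject}"`;
}

console.log(checkCommit("feat(auth): add login")); // → OK
console.log(checkCommit("updated some stuff"));    // → FAIL: ...
```

The Skill's SKILL.md would link to this script so Claude runs the tested code instead of improvising the regex itself, and only the one-line output consumes context.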
## Skills vs Commands: When to Use Each **Use Commands when:** - You want explicit control over when something runs - The workflow is user-initiated (deploy, commit, generate) - You need argument passing (`$ARGUMENTS`, `$1`, `$2`) - The action is discrete and standalone **Use Skills when:** - Knowledge should apply automatically based on context - You want consistent standards without remembering to invoke them - The expertise spans multiple types of requests - You're encoding team knowledge Claude should always have **Use both together:** - `/review` command triggers the review - `code-review` Skill provides the standards to apply ## Skill Availability Skills require a paid plan: - **Pro, Max, Team, Enterprise** — Full Skills support - **Free tier** — No Skills If you're on a free plan and want to encode team knowledge, use the `.claude/rules/` directory from [Part 3](/blog/claude-code-mastery-03-project-configuration) instead. Rules are always loaded; Skills are lazy-loaded. ## Discovering Skills **GitHub Topics:** [code block] **Community Collections:** - [obra/superpowers](https://github.com/obra/superpowers) — Core development patterns - [obra/superpowers-skills](https://github.com/obra/superpowers-skills) — Community-contributed Skills - [SkillsMP](https://skillsmp.com) — Marketplace for Claude/ChatGPT Skills **The Agent Skills Specification** — Released December 2025 as an open standard. OpenAI adopted the same format for Codex CLI and ChatGPT, so Skills you build for Claude work across tools. ## What's Next Skills give Claude specialized knowledge that activates automatically. But what if you need Claude to do multiple things simultaneously? What if one task could benefit from parallel execution? That's where Subagents come in. In [Part 6: Subagents](/blog/claude-code-mastery-06-subagents), we'll explore how to spawn specialized agents that work in parallel—running tests while generating documentation while checking for security issues, all at once. 
--- --- # Claude Code Mastery Part 6: Subagents URL: /blog/claude-code-mastery-06-subagents Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Subagents, Automation, Parallel Processing Series: Claude Code Mastery (Part 6 of 10) --- Spawn specialized AI workers that run in parallel. Learn how Claude Code's Task tool delegates work to subagents, when to use built-in vs custom agents, and patterns for orchestrating multi-agent workflows. You're working on a feature and realize you need three things: security review of your changes, test coverage analysis, and documentation updates. Sequentially, that's a lot of waiting. What if Claude could work on all three simultaneously? That's what subagents do. They're specialized AI workers with their own context windows that run in parallel, each focused on a specific task, returning only the relevant results to your main conversation. ## What Are Subagents? Subagents are isolated Claude instances that the main Claude spawns to handle specific tasks. The key insight: **each subagent has its own context window**. This matters because: - Your main conversation stays clean—no clutter from research tangents - Subagents can work in parallel without interfering with each other - Only distilled results come back, not the full exploration history - Complex tasks get broken into manageable pieces Think of it like delegating to team members. You don't need to see every Google search they did—you just need their conclusions. ![Drake meme - rejecting doing everything yourself sequentially, approving delegating to specialized subagents](/assets/blog/subagents-drake.jpg) ## The Task Tool: How It Works Under the hood, Claude uses the **Task tool** to spawn subagents. When you ask Claude to do something complex, it can delegate parts to specialized workers: [code block] Claude can run up to 7 subagents simultaneously. The results merge back into your main conversation as clean, summarized output. 
## Built-in Subagent Types Claude Code ships with several built-in agent types you can invoke: **`Explore`** — Fast codebase exploration. Use for finding files, searching patterns, understanding architecture. Read-only, optimized for speed. [code block] **`Plan`** — Software architect mode. Designs implementation strategies, identifies critical files, considers trade-offs. Returns step-by-step plans. [code block] **`Bash`** — Command execution specialist. Runs shell commands, handles git operations, executes scripts. [code block] **`general-purpose`** — The Swiss Army knife. Research, multi-step tasks, anything that doesn't fit the specialized types. [code block] ## When Claude Uses Subagents Automatically Claude doesn't always announce when it's using subagents—it just does it when efficient. Common automatic delegation: - **Plan mode exploration** — When you enter plan mode, Claude often spawns an Explore agent to understand your codebase before proposing changes - **Parallel file analysis** — Reading multiple unrelated files simultaneously - **Research tasks** — Web searches and documentation lookups - **Pattern searches** — Finding code patterns across large codebases You'll see activity indicators when subagents are working. The results appear as if Claude did everything itself—because technically, it did. ![Always has been meme - all these Claudes working in parallel, they're the same Claude](/assets/blog/subagents-same-claude.jpg) ## Creating Custom Subagents For specialized, repeatable tasks, create your own subagents. They live in `.claude/agents/`: [code block] ### AGENT.md Format [code block] ### Frontmatter Options **`name`** (required) — Identifier for the agent. Lowercase, hyphens allowed. **`description`** (required) — When to use this agent. Claude matches your requests against this description to decide whether to invoke it. Be specific. **`tools`** — Which tools the agent can use. Comma-separated: `Read, Grep, Glob, Bash, Write, Edit`. 
Principle of least privilege—only grant what's needed. **`model`** — Which Claude model to use: - `sonnet` — Fast, cost-effective (default) - `opus` — Maximum capability for complex reasoning - `haiku` — Fastest, for simple tasks - `inherit` — Use same model as main conversation **`color`** — Visual identifier in UI (optional) ## Invoking Custom Subagents Once created, invoke subagents naturally: [code block] Claude recognizes your agent and spawns it with the appropriate context. ## Parallel Execution Patterns ### Fan-Out: Multiple Perspectives Analyze the same code from different angles simultaneously: [code block] [code block] Each agent focuses on its specialty. You get comprehensive feedback faster than sequential review. ### Pipeline: Sequential Handoff Chain agents where each builds on the previous: [code block] Use when later stages depend on earlier results. ### Specialist Routing Route tasks to domain experts: [code block] ## Example: Building a Review Team Let's create a multi-agent code review system. ### Agent 1: Security Auditor [code block] ### Agent 2: Performance Analyzer [code block] ### Agent 3: Test Coverage [code block] ### Using Them Together [code block] Claude spawns all three, they work simultaneously, and you get a comprehensive report. ## Best Practices ### 1. Single Responsibility Each agent should do one thing well. Don't create a "does everything" agent—that's just Claude with extra steps. ### 2. Specific Descriptions The description field determines when Claude auto-invokes your agent. Be specific: [code block] ### 3. Minimal Permissions Only grant tools the agent needs: [code block] ### 4. Clear Output Format Specify exactly how you want results. Agents that return consistent formats are easier to work with. ### 5. Constraints Section Tell the agent what NOT to do. Prevents scope creep and unexpected behavior. ## Popular Subagent Collections Don't build everything from scratch. 
These collections are battle-tested: **[VoltAgent/awesome-claude-code-subagents](https://github.com/VoltAgent/awesome-claude-code-subagents)** — 100+ agents for general development workflows. **[wshobson/agents](https://github.com/wshobson/agents)** — 99 agents and 15 orchestrators for production workflows. **[rshah515/claude-code-subagents](https://github.com/rshah515/claude-code-subagents)** — 133+ agents covering the full SDLC. Install by cloning into your `.claude/agents/` directory: [code block] ## Commands vs Skills vs Subagents **Commands** — Prompt templates you invoke explicitly with `/command`. They share context with your main conversation and execute sequentially. Best for repeated prompts where you want explicit control. **Skills** — Knowledge modules that Claude auto-matches to your requests. They load into your main context when relevant. Best for standards and patterns that should apply automatically. **Subagents** — Specialized workers with independent context windows. Can run in parallel and return only distilled results. Best for complex parallel tasks or when you need isolated exploration. ## Limitations - **No nesting**: Subagents cannot spawn other subagents - **Token cost**: Each subagent uses its own context, which costs tokens - **Coordination**: Complex multi-agent workflows require clear orchestration - **Write conflicts**: Be careful with multiple agents writing to the same files ## What's Next Subagents let Claude delegate and parallelize work. But what if you want Claude to connect to external services—databases, APIs, or custom tools your team has built? That's where MCP (Model Context Protocol) comes in. In [Part 7: MCP Servers](/blog/claude-code-mastery-07-mcp-servers), we'll explore how to extend Claude's capabilities by connecting it to external data sources and services. 
--- --- # Claude Code Mastery Part 7: MCP Servers URL: /blog/claude-code-mastery-07-mcp-servers Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, MCP, Integration, Developer Tools Series: Claude Code Mastery (Part 7 of 10) --- Connect Claude to databases, APIs, and external services with Model Context Protocol. Learn to configure MCP servers, manage them with CLI commands, and extend Claude's capabilities beyond your filesystem. You're debugging an issue and need to check the database. So you open a terminal, connect to PostgreSQL, write a query, copy the results, paste them into Claude, explain the schema... Wouldn't it be easier if Claude could just query the database directly? ![Left exit 12 off ramp meme - choosing MCP over copy-paste workflow](/assets/blog/mcp-exit-ramp.jpg) That's what MCP servers enable. They're bridges that connect Claude to external services—databases, GitHub, cloud platforms, documentation sources—so Claude can work with your entire development ecosystem, not just your local files. ## What is MCP? Model Context Protocol (MCP) is an open standard for AI-tool integrations. Think of it as a universal adapter that lets Claude connect to any service that speaks the protocol. [code block] The MCP server handles all the complexity—authentication, API protocols, data transformation—so Claude can interact naturally. You ask "show me users who signed up last week," and Claude queries your database directly. 
**Why it matters:** - **No copy-paste workflows** — Claude accesses data at the source - **Real-time information** — Always current, not stale context - **Secure by design** — Servers handle credentials, not Claude - **Open standard** — Works across tools, not just Claude > **Building on the basics:** While [essential commands from Part 1](/blog/claude-code-mastery-01-getting-started#essential-commands) control Claude's behavior within your local project, MCP servers extend what Claude can access beyond your filesystem—databases, APIs, external services, and more. ## Security First **MCP servers execute code on your system.** Before using any server: 1. **Review the source** — Check what permissions it needs 2. **Use read-only when possible** — Prevent accidental modifications 3. **Scope access narrowly** — Grant access to specific paths/tables 4. **Trust your sources** — Only use well-maintained servers 5. **Protect credentials** — Use environment variables, never hardcode Third-party MCP servers aren't verified by Anthropic. Servers that fetch external content can expose you to prompt injection. Be careful. ## Configuration Locations MCP servers can be configured at different scopes: **Project-scoped (recommended for teams):** [code block] **User-scoped (personal tools):** [code block] Project-scoped configuration (`.mcp.json`) is the cleanest option—check it into git and everyone on your team gets the same setup. 
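As a minimal sketch, a project-scoped `.mcp.json` could look like the following (the Postgres server package and the `DATABASE_URL` variable are illustrative; your server may take its connection string differently):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "${DATABASE_URL}"
      }
    }
  }
}
```

Checked into git, every teammate gets the same server with their own local `DATABASE_URL`.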
## Configuration Structure [code block] **`command`** — The executable to run (usually `npx` for Node-based servers) **`args`** — Command arguments, typically the package name **`env`** — Environment variables, using `${VAR}` syntax for secrets ## The /mcp Command Manage MCP servers from within Claude Code: [code block] From the terminal, use CLI commands: [code block] **Scope options:** - `--scope local` — Session only, temporary - `--scope user` — Persistent across all projects - `--scope project` — Saved to `.mcp.json` for team sharing ## Essential MCP Servers ### GitHub Full repository integration—PRs, issues, code review, releases. [code block] **What Claude can do:** - Create and review pull requests - Manage issues and labels - Search code across repositories - Trigger workflows **Example:** [code block] ### PostgreSQL Direct database access with schema awareness. [code block] **What Claude can do:** - Query tables with natural language - Understand schema relationships - Generate migrations - Debug data issues **Example:** [code block] ### Context7 Real-time, version-specific library documentation. [code block] **What Claude can do:** - Fetch current docs for any npm package - Get version-specific API details - Avoid hallucinated APIs - Stay current with library changes **Example:** [code block] This is particularly valuable because Claude's training data has a cutoff—Context7 gives it access to documentation released after training. ![Surprised Pikachu meme - Claude confidently using deprecated API from 2023](/assets/blog/mcp-pikachu-api.jpg) ### Filesystem Access files beyond your current project. [code block] **What Claude can do:** - Read files from other directories - Access shared configuration - Work across multiple projects **Example:** [code block] ### Supabase Full Supabase platform integration. 
[code block] **What Claude can do:** - Query Supabase tables - Manage auth users - Work with storage buckets - Generate RLS policies ### Web Fetch Fetch and process web content. [code block] **What Claude can do:** - Fetch web pages and APIs - Process documentation sites - Check service status - Research current information ### Playwright Browser automation for testing and web interaction. [code block] **What Claude can do:** - Navigate web pages and interact with elements - Take screenshots and capture page state - Fill forms and click buttons - Run end-to-end test scenarios - Debug UI issues by seeing what's on screen **Example:** [code block] ## Server Categories ### Databases **PostgreSQL** — `@modelcontextprotocol/server-postgres` — Full SQL access **MySQL** — `@modelcontextprotocol/server-mysql` — MySQL integration **SQLite** — `@modelcontextprotocol/server-sqlite` — Local databases **MongoDB** — Community servers available — Document databases ### Cloud & DevOps **GitHub** — `@modelcontextprotocol/server-github` — Full GitHub API **GitLab** — Community servers — GitLab integration **Cloudflare** — `@cloudflare/mcp-server-cloudflare` — Workers, KV, D1, R2 **AWS** — Community servers — S3, Lambda, etc. ### Testing & Automation **Playwright** — `@anthropic/mcp-server-playwright` — Browser automation and E2E testing ### Documentation & Research **Context7** — `@context7/mcp-server` — Real-time library docs **Perplexity** — Community servers — AI-powered search **Fetch** — `@modelcontextprotocol/server-fetch` — Web content ## Multiple Servers Together Most projects benefit from several MCP servers working together: [code block] Claude automatically uses the right server based on your request. Ask about database schema, it uses PostgreSQL. Ask about a PR, it uses GitHub. 
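From the terminal, registering one of these servers for the whole team might look like this (flag order and syntax as I understand the current CLI; confirm with `claude mcp --help` on your version):

```shell
# Register a GitHub MCP server in .mcp.json so teammates get it via git
claude mcp add github --scope project -- npx -y @modelcontextprotocol/server-github

# Confirm it registered
claude mcp list
```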
## Practical Workflows ### Database-Driven Development [code block] ### Cross-Service Automation [code block] ### Documentation-Aware Coding [code block] ## Troubleshooting ### Server Won't Connect 1. **Check status:** Run `/mcp` to see connection state 2. **Enable debug mode:** `claude --mcp-debug` for detailed logs 3. **Verify credentials:** Ensure environment variables are set 4. **Test manually:** Run the server command directly in terminal ### Windows-Specific Issues On native Windows (not WSL), npx requires a wrapper: [code block] Without `cmd /c`, you'll get "Connection closed" errors. ### Token Limits MCP outputs have limits to prevent context overflow: - **Warning threshold:** 10,000 tokens - **Default max:** 25,000 tokens Adjust with `MAX_MCP_OUTPUT_TOKENS` environment variable if needed. ### Server Not Responding Some servers need time to initialize. If a server shows "failed" immediately after startup, wait a moment and run `/mcp` again. ## Best Practices ### 1. Principle of Least Privilege Request only necessary permissions: [code block] ### 2. Environment Variables for Secrets Never hardcode tokens: [code block] Set variables in your shell profile or `.env` file. ### 3. Project vs User Scope - **Project scope** (`.mcp.json`): Database connections, project-specific tools - **User scope** (`~/.claude/settings.json`): GitHub, personal utilities ### 4. 
Review Server Code Before installing any MCP server, check: - What commands it can execute - What network access it needs - Who maintains it - Recent activity and issues ## Finding MCP Servers **Official servers:** - [modelcontextprotocol.io/examples](https://modelcontextprotocol.io/examples) **Community directories:** - [mcpservers.org](https://mcpservers.org) - [mcpcat.io](https://mcpcat.io) - [awesome-mcp-servers](https://github.com/punkpeye/awesome-mcp-servers) **Search GitHub:** - Topic: `mcp-server` - Search: `"@modelcontextprotocol"` in package.json ## What's Next MCP servers extend Claude's reach to external services. Combined with everything we've covered—commands, skills, and subagents—you now have a powerful toolkit for AI-assisted development. In [Part 8: Production Workflows](/blog/claude-code-mastery-08-production-workflows), we'll put it all together with real-world patterns: CI/CD integration, team collaboration, and workflows that scale from side project to production system. --- --- # Claude Code Mastery Part 8: Production Workflows URL: /blog/claude-code-mastery-08-production-workflows Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, GitHub Actions, CI/CD, Production Series: Claude Code Mastery (Part 8 of 10) --- Move from experimentation to production with GitHub Actions integration, @claude mentions, automated PR reviews, and team workflows that scale. You've been using Claude Code locally—commands, skills, subagents, MCP servers. It's transformed how you work. But what about when you're not at your terminal? What about your team? Claude Code's GitHub integration lets you @mention Claude directly in issues and PRs. Your team can get AI assistance without everyone installing anything locally. And with GitHub Actions, Claude can review every PR automatically. 
## The GitHub Integration With a simple `@claude` mention in any PR or issue, Claude can: - **Analyze code** — Review PRs, explain changes, find issues - **Create pull requests** — Implement features from issue descriptions - **Fix bugs** — Investigate, identify root cause, submit patches - **Answer questions** — Explain codebase patterns and architecture - **Follow your standards** — Uses your CLAUDE.md conventions All of this happens asynchronously. You mention `@claude`, go grab coffee, and come back to a PR or detailed analysis. ## Installation The easiest setup is through Claude Code itself: [code block] This command guides you through: 1. Installing the Claude GitHub App on your repository 2. Authorizing the required permissions 3. Setting up the `ANTHROPIC_API_KEY` secret You need repository admin access to complete installation. ### Manual Setup If you prefer manual configuration: 1. **Create the workflow file** at `.github/workflows/claude.yml`: [code block] 2. **Add your API key**: Repository Settings → Secrets → Add `ANTHROPIC_API_KEY` 3. **Commit and push** the workflow file ## @claude Mentions Once installed, mention `@claude` in any issue or PR to trigger assistance. ### In Issues [code block] Claude analyzes the issue, creates a plan, and opens a PR with the implementation. [code block] Claude explores the codebase, identifies potential causes, and reports findings. ### In Pull Requests [code block] Claude analyzes the diff, comments on specific lines, and provides a summary. [code block] Claude reads the changes, understands the context, and explains the reasoning. ### In PR Comments [code block] Claude updates the code and pushes a new commit to the PR. [code block] Claude writes and commits the additional tests. ## Workflow Triggers Configure when Claude activates: ### Comment-Triggered (Interactive) [code block] Claude responds to `@claude` mentions in comments. Most flexible—team members trigger it when needed. 
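A minimal comment-triggered workflow might look like this sketch (action name and input names based on `anthropics/claude-code-action` as commonly documented; verify against the action's README before committing):

```yaml
# .github/workflows/claude.yml — respond to @claude mentions in comments
name: Claude
on:
  issue_comment:
    types: [created]
jobs:
  claude:
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```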
### PR-Triggered (Automatic Review) [code block] Claude automatically reviews every new PR and when new commits are pushed. Add a prompt to specify what to look for: [code block] ### Issue-Triggered (Auto-Triage) [code block] Claude can triage new issues, add labels, or start implementation when specific labels are applied. ### Scheduled (Maintenance) [code block] Run maintenance tasks: dependency updates, documentation refresh, codebase health checks. ## Production Workflow Patterns ### Pattern 1: Dual-Loop Review Combine automated checks with AI review: [code block] **Why it works:** Linters catch syntax issues. Claude catches design problems. Together, they're comprehensive. ![Two buttons meme - sweating over trusting the linter vs trusting Claude's review](/assets/blog/production-two-buttons.jpg) ### Pattern 2: Spec-Driven Development Structure your workflow from requirements to implementation: [code block] **Example issue:** [code block] ### Pattern 3: Bug Fix Pipeline Streamlined bug investigation and fixing: [code block] Claude will: 1. Analyze the codebase for address handling 2. Identify the root cause 3. Create a PR with the fix 4. Add tests for international addresses ### Pattern 4: Path-Specific Reviews Trigger different review depth based on what changed: [code block] ## Team Configuration ### Shared CLAUDE.md for Teams Your project's CLAUDE.md is used by both local Claude Code and GitHub Actions. Include team-specific instructions: > **Important:** When you @mention Claude on GitHub, it automatically reads your project's [CLAUDE.md configuration from Part 3](/blog/claude-code-mastery-03-project-configuration#claudemd-your-projects-memory) to understand your team's standards, conventions, and restrictions. [code block] ### Permission Boundaries Configure what Claude can and cannot do: [code block] **Always require human approval for merges.** Claude can review, suggest, and even commit—but a human should click the merge button. 
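As a sketch, a permission boundary in `.claude/settings.json` could look like this (rule patterns follow Claude Code's allow/deny permission syntax; the specific paths and commands are illustrative):

```json
{
  "permissions": {
    "allow": [
      "Read(**)",
      "Edit(src/**)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Edit(.github/workflows/**)",
      "Bash(git push:*)"
    ]
  }
}
```

Claude can read everything and edit application code, but cannot touch CI workflows or push on its own.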
![Change my mind meme - Claude should auto-merge PRs without human review](/assets/blog/production-change-my-mind.jpg) ## Authentication Options ### Direct API (Recommended) [code block] Most straightforward. You pay per-token through your Anthropic account. ### Amazon Bedrock [code block] For enterprise environments with AWS infrastructure. ### Google Vertex AI [code block] For teams on Google Cloud. ## Cost Management GitHub Actions usage + API tokens can add up. Monitor and control costs: ### Set Token Limits [code block] ### Limit Trigger Frequency [code block] ### Use Labels as Gates [code block] Only trigger when a specific label is applied. ## Best Practices ### 1. Be Specific with Instructions [code block] ### 2. Provide Context [code block] ### 3. Iterate and Refine Treat Claude like a junior developer who benefits from feedback: [code block] ### 4. Document Team Commands Create a team reference for @claude usage: [code block] > **Note:** For organizing team-wide custom commands that work locally (not just in GitHub), see [Global vs Project Commands in Part 4](/blog/claude-code-mastery-04-custom-commands#global-vs-project-commands). ### 5. Protect Sensitive Code Configure Claude to flag rather than modify critical areas: [code block] ## Troubleshooting ### Claude Doesn't Respond to Mentions 1. Check the workflow file exists at `.github/workflows/claude.yml` 2. Verify `ANTHROPIC_API_KEY` secret is set 3. Check Actions tab for workflow run logs 4. Ensure issue_comment trigger is configured ### Response is Cut Off Token limits may be too low. Increase `max_tokens` in the action config. ### Claude Makes Wrong Assumptions Add more context to CLAUDE.md or be more specific in your mention. ### High Costs - Use label-gated triggers instead of automatic on every PR - Limit to specific file paths - Reduce max_tokens - Use Haiku for simple tasks, Opus for complex ones ## What's Next You now have Claude integrated into your development workflow—locally and on GitHub. 
Commands, skills, subagents, MCP servers, and GitHub Actions give you a complete AI-assisted development toolkit. In [Part 9: Power User Secrets](/blog/claude-code-mastery-09-power-user-secrets), we'll explore advanced techniques that experienced users rely on: prompt engineering patterns, debugging strategies, and workflows that maximize Claude's capabilities. --- --- # Claude Code Mastery Part 9: Power User Secrets URL: /blog/claude-code-mastery-09-power-user-secrets Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Power User, Productivity, Hidden Features Series: Claude Code Mastery (Part 9 of 10) --- The tips and tricks that separate casual users from Claude Code wizards. Learn ultrathink, git worktrees, keyboard shortcuts, and hidden features that boost productivity. Some of these tips are documented, some were discovered through experimentation, and some came from decompiling the Claude Code bundle. ## Extended Thinking: The Real Story We first introduced extended thinking in [Part 2](/blog/claude-code-mastery-02-mental-model#extended-thinking-when-claude-needs-to-think-harder), but here's the complete story. You've probably heard about "ultrathink" and "think harder." Here's what actually works according to [Anthropic's official documentation](https://www.anthropic.com/engineering/claude-code-best-practices). ### The Think Keyword Hierarchy Claude Code maps specific phrases to thinking budgets. These are the **officially documented** trigger words: **Think** (~4,000 tokens) — `think` **Megathink** (~10,000 tokens) — `think hard`, `think deeply`, `think more`, `megathink` **Ultrathink** (~32,000 tokens) — `think harder`, `think very hard`, `ultrathink` [code block] The higher the budget, the more "mental runway" Claude gets to consider multiple approaches, evaluate trade-offs, and catch edge cases.
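Matching keyword to stakes might look like this (the task itself is hypothetical):

```text
> ultrathink: plan a zero-downtime migration of our session storage
> from Redis to Postgres. List the failure modes before the steps.
```

A typo fix gets no keyword at all; a migration plan earns the full budget.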
![Claude Code ultrathink keyword with rainbow highlighting and max thinking indicator](/assets/blog/ultrathink-diagram.png) ### January 2026 Updates A few things changed in Claude Code 2.1: - **Thinking enabled by default for Opus 4.5** — No need to manually trigger it for complex tasks - **Toggle changed from Tab to Alt+T** — Avoids accidental activation - **Real-time thinking display** — Press `Ctrl+O` to see Claude's reasoning as it happens ### When to Use Each Level **ultrathink** (~32K tokens) — Architecture decisions, security reviews, complex debugging, anything you'd schedule a meeting for **think hard / megathink** (~10K tokens) — Multi-file refactoring, API design, debugging tricky issues **think** (~4K tokens) — Quick analysis, code review, understanding unfamiliar code **No keyword** — Simple fixes, typos, routine tasks where extended thinking just adds latency **The rule:** Match thinking budget to problem complexity. Don't ultrathink a typo fix, don't skip thinking on architecture. ### Stay Current with Recent Information Claude's training data has a cutoff date. For the latest library APIs, recent framework updates, or current best practices: **Use Context7 MCP Server:** Fetches real-time, version-specific documentation. See [Part 7: MCP Servers](/blog/claude-code-mastery-07-mcp-servers#context7) for setup. [code block] **Ask Claude to search online:** Claude can perform web searches for recent information. [code block] This ensures you're working with current APIs, not deprecated ones from Claude's training data. ## Multi-Window Power Moves Don't wait for one task to finish. Run multiple Claude Code instances simultaneously—each in its own terminal, each with isolated context. ### Sound Notifications: Know When Claude's Done When juggling multiple windows, you need to know when Claude finishes without constantly checking. Enable the terminal bell: [code block] You'll hear a beep when Claude completes a task or needs input. 
No more context-switching to check progress. For richer notifications, use a hook in `~/.claude/settings.json`: [code block] This plays a sound on macOS whenever Claude needs attention. On Linux, use `paplay` or `aplay` instead. Windows users can use `powershell.exe -c "[System.Media.SystemSounds]::Beep.Play()"`. ### Cross-Repo Work: Reference Without Modifying Working on a frontend that calls a backend API? Don't run Claude in both repos simultaneously. Instead, give Claude read-only access to the backend from your frontend project. In your frontend's `.claude/settings.json`: [code block] Now Claude can read the backend controllers and models to understand the API, but can't accidentally modify them. Ask things like: [code block] Claude reads the backend, understands the API shape, and writes only to your frontend. ### Same Project, Different Branches: Git Worktrees When you need to work on multiple features in the *same* repo, git worktrees let you check out different branches in separate directories: [code block] Now you have: [code block] Run Claude in each directory—completely isolated sessions, no branch switching conflicts. ### Why This Works - Each Claude session has isolated context - Read-only access prevents accidental cross-repo changes - You can reference external code without risking modifications - Context from one task doesn't pollute another ### Cleanup Worktrees When done with a feature: [code block] ## Screenshot and Image Input Claude Code can see images. Drag and drop screenshots directly into your terminal. ### How It Works 1. Take a screenshot (Cmd+Shift+4 on Mac) 2. Drag the image file into your Claude Code terminal 3. Ask about it ### Use Cases **UI Debugging:** [code block] **Design Implementation:** [code block] **Error Screenshots:** [code block] ## Mermaid Diagram Generation Have Claude create visual diagrams you can render anywhere. 
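For example, asking Claude for a request-flow diagram of a hypothetical API might produce something like:

```mermaid
flowchart LR
    Client -->|HTTPS| Gateway[API Gateway]
    Gateway --> Auth[Auth Service]
    Gateway --> Orders[Order Service]
    Orders --> DB[(PostgreSQL)]
    Orders --> Cache[(Redis)]
```

Paste the block into any mermaid-aware renderer and it becomes a diagram.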
### Architecture Diagrams [code block] Output: [code block] ### Where to Render - **[mermaid.live](https://mermaid.live/)** - Online editor with live preview - **GitHub** - Renders mermaid in markdown files and comments - **VS Code** - With mermaid preview extension - **Notion** - Supports mermaid code blocks ### Diagram Types [code block] [code block] [code block] ## Memory Bank Pattern Persist context across sessions without bloating CLAUDE.md. This pairs well with the [Document + /clear Strategy](/blog/claude-code-mastery-02-mental-model#the-document--clear-strategy) from Part 2. ### Setup Create a memory directory: [code block] ### Session Memory File [code block] ### Using Memory Files Start sessions with: [code block] Update at end of sessions: [code block] ## Token Optimization Tricks Save money and improve response quality. ### Prefer /clear Over /compact > **January 2026 Update:** `/compact` is now instant (v2.0.64+) and auto-compact triggers at 75% capacity. Also try `/context` to check MCP server usage before compacting. As we covered in [Part 2](/blog/claude-code-mastery-02-mental-model), `/compact` is tempting but risky. The summarization is opaque—you don't know what gets preserved or lost. **Most of the time, `/clear` is better:** - Starting something new - Less than half your context is relevant - You can easily re-establish what matters **If you must use `/compact`, guide it:** [code block] ### Reference, Don't Paste **Bad (wastes tokens):** [code block] **Good (efficient):** [code block] Claude will read the file itself, only loading what it needs. **Pro tip: Copy Path in VS Code.** Right-click any file → **Copy Relative Path** (or `Cmd+Shift+Alt+C`). Paste the exact path instead of typing it. No typos, no ambiguity—especially useful for deep nested files or when multiple files have similar names. 
Combine with line numbers for surgical precision: `src/auth/login.ts:45-60` ### The Document & Clear Pattern > **January 2026 Update:** This is now built into Plan Mode. When you accept a plan, Claude auto-clears context and loads the plan fresh. Opt out if you prefer keeping context. For complex multi-session work, use explicit documentation: [code block] You control exactly what persists. No guessing what the summarizer kept. ## Model Switching Strategies Different models for different tasks. ### When to Use Each Model **Opus** ($$$) — Complex architecture, security reviews, novel problems that need maximum reasoning power. **Sonnet** ($$) — General coding and everyday tasks. This is the default and handles most work well. **Haiku** ($) — Quick questions, simple tasks, high-volume operations where speed matters more than depth. ### Strategic Model Usage **Start complex features with opus:** [code block] **Switch to sonnet for implementation:** [code block] **Use haiku for quick checks:** [code block] ## Headless Mode Automation Script Claude Code for CI/CD and automation using the `-p` (print) flag. ### Basic Usage [code block] The `-p` flag runs Claude non-interactively and exits when complete. Perfect for scripts and pipelines. ### Output Formats Control how responses are returned: [code block] ### Tool Auto-Approval Pre-approve tools to avoid permission prompts in CI: [code block] ### Pipeline Examples [code block] ### Session Continuation Continue conversations programmatically: [code block] ## Keyboard Shortcuts Master these to navigate Claude Code faster. ### Essential Shortcuts **Escape** — Interrupt Claude mid-response. Useful when you see it heading in the wrong direction. **Escape, Escape** (double-tap) — Open command history. Scroll through previous prompts and re-run them. **Ctrl+C** — Cancel current operation entirely. **Ctrl+L** — Clear terminal display (not context, just visual clutter). **Up Arrow** — Cycle through your recent prompts. 
### In VS Code Extension The Claude Code VS Code extension brings the same capabilities into your editor. It's not a separate product—it's Claude Code running natively in VS Code, sharing the same configuration, commands, and MCP servers. **Cmd+L** — Open the Claude panel. This is where you'll spend most of your time—a persistent chat interface that understands your workspace. **Cmd+Shift+L** — Send selected code to Claude. Highlight code, hit the shortcut, and Claude sees exactly what you're looking at. **Cmd+I** — Inline editing mode. Claude edits code in place with a diff view—accept or reject changes without leaving your file. **Cmd+Shift+P** → "Claude" — Access all Claude commands from the command palette. The extension shares your `.claude/` configuration, so custom commands, MCP servers, and CLAUDE.md instructions work identically. The main advantage over terminal Claude Code: visual diffs, inline suggestions, and tighter editor integration. ### Voice Input: Talk to Claude When I'm alone in the room, I often switch to voice. It's faster than typing for explaining complex problems, and sometimes talking through an issue helps clarify your thinking before Claude even responds. On Mac, setup is simple: **System Settings → Keyboard → Dictation → Enable**. Then use the dictation shortcut (double-tap Fn by default) anywhere—including the Claude panel. ![Voice dictation settings on macOS](/assets/blog/voice.png) Voice works especially well for: - Explaining bugs you've been staring at for hours - Describing UI changes while looking at a mockup - Quick questions when your hands are on the trackpad - Rubber-ducking problems out loud The combination of voice input with Claude's understanding makes for surprisingly natural conversations about code. ## Hidden Productivity Hacks ### The "As If" Pattern [code block] [code block] Sets quality expectations without explicit requirements. 
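A hypothetical example of the pattern:

```text
> Review this payment handler as if you were a senior engineer at a
> payments company preparing for a compliance audit.
```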
![Roll Safe meme - Can't write bad code if you pretend to be a senior Stripe engineer](/assets/blog/roll-safe-stripe-engineer.jpg) ### The Constraint Pattern [code block] [code block] Constraints often lead to better solutions. ### The Explain First Pattern [code block] Forces Claude to think before acting. Catch bad approaches early. ### The Rubber Duck Prompt [code block] Claude as debugging partner, not just code generator. ## What's Next You've learned the power user secrets—ultrathink for deep reasoning, worktrees for parallel development, headless mode for automation, and the productivity patterns that make a difference. In [Part 10: Vibe Coding](/blog/claude-code-mastery-10-vibe-coding), we'll wrap up with the philosophy of working with AI—when to let Claude drive, when to take the wheel, and how to develop your own intuition for AI-assisted development. --- --- # Claude Code Mastery Part 10: Vibe Coding Philosophy URL: /blog/claude-code-mastery-10-vibe-coding Published: 2026-01-15 Author: Jo Vinkenroye Tags: Claude Code, AI, Vibe Coding, Philosophy, Productivity Series: Claude Code Mastery (Part 10 of 10) --- The mindset that separates people who use AI from people who flow with it. Learn when to let Claude drive, when to take the wheel, and how to develop intuition for AI-assisted development. You've learned the tools. Commands, skills, subagents, MCP servers, headless automation—you know how Claude Code works. But knowing the tools isn't the same as flowing with them. This final chapter is about mindset. The philosophy that turns competent Claude Code users into developers who move at a different speed entirely. ## What is Vibe Coding? The term comes from Andrej Karpathy's viral tweet in February 2025: > "There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs are getting too good." 
He described his workflow: accepting all changes without reading diffs, copy-pasting error messages with no comment, and asking the AI for trivial things like "decrease the padding on the sidebar by half" because he's too lazy to find it himself.

Then he added the crucial caveat: **"This is not too bad for throwaway weekend projects, but still quite amusing."**

That caveat matters. Vibe coding isn't a universal approach—it's a mode of working that's appropriate for specific contexts.

## The Spectrum of AI-Assisted Development

Simon Willison put it well: "Not all AI-assisted programming is vibe coding." There's a spectrum:

**Pure Vibe Coding** — Accept all, don't read diffs, copy-paste errors, let the AI figure it out. Great for weekend hacks and throwaway prototypes.

**Guided AI Development** — You direct, Claude implements, you review. The approach we've covered throughout this series.

**AI-Augmented Engineering** — Traditional engineering discipline with AI acceleration. Design docs, code reviews, testing, security vetting—all still happen, just faster.

Most professional work lives in the middle. Pure vibe coding on one end is risky for anything beyond experiments. Pure manual coding on the other end means leaving speed on the table. The skill is knowing where on the spectrum you should be for any given task.

## When to Vibe

Vibe coding works when:

- **It's a prototype** — You're exploring an idea, not shipping to users
- **You can throw it away** — If it doesn't work, you'll start fresh
- **Security doesn't matter** — No real user data, no production credentials
- **You're the only user** — Nobody else depends on this code
- **Speed matters more than quality** — You're validating a concept, not building infrastructure

**Example vibe-appropriate tasks:**

[code block]

## When NOT to Vibe

Pull back from vibe mode when:

- **Security is involved** — Auth, payments, user data. Always review this code.
- **Others depend on it** — Team members, users, production systems
- **Money is at stake** — API costs, billing, financial calculations
- **It's hard to undo** — Database migrations, destructive operations
- **You need to maintain it** — Code you'll live with for months or years

For these cases, shift to guided AI development: Claude implements, you review every change, you understand what's happening.

![Is This a Pigeon meme - developer pointing at production code asking "Is this vibe coding?"](/assets/blog/vibe-is-this-pigeon.jpg)

## The Conductor Mindset

Even when not full-vibe coding, the core insight applies: **you're the conductor, not the instrumentalist**.

Traditional coding: You write every line. AI-assisted coding: You direct the orchestra. Claude handles the fingering, you decide what music to play.

[code block]

This shift is why planning matters more, not less. Without clear direction, Claude will make reasonable-seeming decisions that don't fit your architecture.

## The Paradox: More Planning Required

Here's what surprises people: AI-assisted development requires MORE upfront planning, not less.

Without planning, Claude will:

- Create inconsistent patterns across files
- Miss architectural constraints you haven't stated
- Duplicate functionality that already exists
- Make hard-to-reverse decisions

### What Good Planning Looks Like

[code block]

Claude is incredibly good at implementation. It's less good at strategic decisions about architecture. Play to strengths: you architect, Claude implements.

![Epic Handshake meme - You and Claude agreeing on clear planning upfront](/assets/blog/vibe-epic-handshake.jpg)

## Effective Direction

The difference between frustrating AI sessions and productive ones often comes down to how you direct.

**Too vague:**

[code block]

No constraints, no architecture, no patterns. A recipe for misaligned output.

**Too controlling:**

[code block]

You're writing code with extra steps. Let Claude code.
**Just right:**

[code block]

Clear goal, clear constraints, room for Claude to work.

## Trust But Verify

Karpathy can vibe code because he immediately sees if the code works and can throw it away if it doesn't. For anything beyond prototypes, add verification.

**For implementation:**

[code block]

Forces Claude to articulate its reasoning. Catches misalignments early.

**For security-sensitive code:**

[code block]

Makes Claude your first-line security reviewer.

**For complex logic:**

[code block]

Catch bad approaches before they become bad code.

## The Iteration Mindset

Vibe coding is inherently iterative. The first pass won't be perfect. That's fine.

[code block]

[code block]

[code block]

Fast iterations beat perfect first attempts. Claude is cheap and fast. Use that.

## Anti-Patterns

### The Never-Verifier

[code block]

Pure vibe coding for production. This is how you get security vulnerabilities and broken edge cases.

### The Micromanager

[code block]

You're writing code with extra steps. Either trust Claude to implement, or write it yourself.

### The Context Hoarder

[code block]

Stale context degrades output quality. Clear often, especially between tasks.

### The Vague Dreamer

[code block]

Too ambitious, no constraints, no architecture. You'll get something, but not what you wanted.

## Finding Your Flow

The developers who get the most from Claude Code share some patterns:

**They clear often.** Fresh context for each task. See [Part 2](/blog/claude-code-mastery-02-mental-model).

**They plan first.** Architecture decisions before implementation. See [Part 3](/blog/claude-code-mastery-03-project-configuration).

**They use commands for repetition.** Codified workflows, not repeated prompts. See [Part 4](/blog/claude-code-mastery-04-custom-commands).

**They verify security.** Always review auth, payments, and data handling manually.

**They iterate quickly.** Fast feedback loops, small adjustments, not perfect first attempts.
**They know when to zoom out.** When Claude is looping, they add context or use `ultrathink`.

## The Speed Mindset

Once you internalize the conductor role, something shifts. You start thinking in terms of what to build rather than how to build it.

An experienced vibe coder might think:

[code block]

Then tell Claude exactly that. Implementation becomes a detail—important, but not where your cognitive load goes.

This is the productivity unlock. Not that Claude writes code for you, but that you can operate at a higher level of abstraction while still shipping working software.

## Series Complete

You've now covered the full Claude Code toolkit:

1. **[Getting Started](/blog/claude-code-mastery-01-getting-started)** — Installation, first commands, the mindset
2. **[Mental Model](/blog/claude-code-mastery-02-mental-model)** — How Claude thinks, context management
3. **[Project Configuration](/blog/claude-code-mastery-03-project-configuration)** — CLAUDE.md and settings
4. **[Custom Commands](/blog/claude-code-mastery-04-custom-commands)** — Slash commands for workflows
5. **[Skills](/blog/claude-code-mastery-05-skills)** — Auto-loading knowledge modules
6. **[Subagents](/blog/claude-code-mastery-06-subagents)** — Parallel specialized workers
7. **[MCP Servers](/blog/claude-code-mastery-07-mcp-servers)** — External service integration
8. **[Production Workflows](/blog/claude-code-mastery-08-production-workflows)** — GitHub Actions, team patterns
9. **[Power User Secrets](/blog/claude-code-mastery-09-power-user-secrets)** — ultrathink, headless mode, hidden features
10. **Vibe Coding Philosophy** — The mindset that ties it together

The tools are just tools. The mindset—knowing when to vibe, when to verify, when to plan, when to let Claude run—that's what makes the difference.

Now go build something.
---

---

# Building SmallShop Part 1: Laying the Foundation

URL: /blog/building-smallshop-progress-report
Published: 2026-01-14
Author: Jo Vinkenroye
Tags: AI, E-commerce, Next.js, Side Project, Automation, Vercel
Series: Building SmallShop (Part 1 of 3)

---

progress report on building an ai-powered alternative to shopify for small shop owners who just want to sell stuff online

So I was helping a friend set up her online boutique on shopify. Should have been simple right? Haha

Menus within menus. Diy themes. Configurations everywhere. Plugins here and there. Before you even launch your small online boutique you've already spent like a week just setting it up

And then there's the cost. It's not just the base fee. It's the monthly subscription plus commission on every purchase plus cost per plugin. For a small boutique owner who just wants to sell jewelry or vintage clothes it's way overkill

## The Idea

I figured in the age of ai I could build something simpler. Something for small shop owners who don't need much. No headache, no hassle. Just let me create your shop and pay a low monthly subscription

If I could convert enough small shops I'd have monthly recurring revenue. Simple business model :)

## The Pricing Pivot

My initial idea: €200 for design and setup, then €20/month. I thought €200 was already pretty low for custom web design, but doable with tools like claude code and vercel these days

My friend laughed. She told me none of her friends who own businesses would pay €200 upfront. Not even close

Ok back to the drawing board

## Going Fully Automated

So I thought, if people won't pay for manual setup, what if there was no manual setup at all? What if I could build the website automagically, and do the dns for private domains automatically using cloudflare apis

That's the route I took

## The Architecture

I analyzed the general layout of most shops.
They basically all have:

- landing page
- all products page
- collection page
- product by id page

Then I looked at what sections are common. A landing page needs a hero, call to action, features list, product grid, collections grid. Based on these blocks I could build most pages using ai in a structured way

### 18 Themes and Styles

Different shops need different vibes. A jewelry store wants something luxurious. A clothing store might want something more vintage or boutique-y. So I built 18 themes and styles users can pick to make their store feel unique

### The Configuration System

Based on these parameters each shop gets:

- different pages
- different layouts or sections per page
- different variants per section
- global theming and styling

This gives enough flexibility to create unique shops from the start. And users can afterwards easily change their style, move sections around, pick a different variant. Whatever they want

## The Onboarding Flow

During onboarding the entire shop gets generated using ai. Two routes:

**Existing website**: the user enters their old website url. I scrape it and extract all the product info, images, layout, design, styles and pages. This goes through an ai pipeline that decides which sections to use, which design style fits, and which pages to create

**No existing website**: the user goes through a stepper to fill out info, upload images, logos, and add their initial products

## The AI Pipeline

Once onboarding starts the pipeline:

1. Uses extracted info to generate sections
2. Chooses which variants of sections to use
3. Decides on the global design style
4. Imports all products and collections
5. Sends products through google's nano banana for ai image generation to create hero banners and better product images

## Current Status

I've got the smallshop landing page done. The idea is that the entire store gets generated automatically so the user only needs to manage their products. That's it

But ai can't always get everything perfect.
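As a side note, here's a rough typescript sketch of what a shop configuration like the one described above could look like. All type and field names here are hypothetical, made up for illustration, and not the actual smallshop code:

```typescript
// Hypothetical sketch of a shop configuration - not the real smallshop types.
// A shop is just data: a theme plus pages built from section blocks.
type SectionKind =
  | "hero"
  | "callToAction"
  | "features"
  | "productGrid"
  | "collectionsGrid";

interface Section {
  kind: SectionKind;
  variant: number; // which visual variant of this section to render
}

interface PageConfig {
  path: string; // e.g. "/", "/products", "/collections", "/products/:id"
  sections: Section[];
}

interface ShopConfig {
  theme: string; // one of the 18 themes/styles
  pages: PageConfig[];
}

// A minimal example: a landing page with a hero and a product grid
const demoShop: ShopConfig = {
  theme: "luxury",
  pages: [
    {
      path: "/",
      sections: [
        { kind: "hero", variant: 1 },
        { kind: "productGrid", variant: 2 },
      ],
    },
  ],
};

console.log(demoShop.pages[0].sections.length); // 2
```

The ai pipeline's job is basically to fill in a structure like this from scraped or user-provided data, and the theme editor just mutates it afterwards.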
So I also built a theme editor as a fallback. Users can move sections, change the design, update the ai generated copywriting or imagery if they want to. It's an afterthought, not the main experience

![theme editor](/assets/blog/smallshop-theme-editor.png)

Still working on:

- the full onboarding flow
- the ai pipeline (this is the tricky part)

The ai pipeline takes time to fine-tune because ai doesn't have deterministic output. Every tweak requires running through the cycle again and testing. It's slow but necessary

## Lessons Learned

After [globalpetsitter](https://www.globalpetsitter.com) I learned that making a good product is not enough. You first have to get traction

So with smallshop I'm trying something different:

- **undercut major platforms on pricing** - make it a no-brainer
- **offer a free ai-generated preview** - let them see their shop before paying anything
- **hook them with the result** - if the ai-generated website impresses them they're more likely to convert

The goal is to remove all friction. No upfront payment, no manual setup. Just "here's what your shop could look like" and then a simple monthly subscription if they want to keep it

Still early days but the foundation is there. More updates as I make progress :)

---

---

# Claude Cowork: the ai coworker we didn't know we needed

URL: /blog/claude-cowork-the-ai-coworker-we-didnt-know-we-needed
Published: 2026-01-14
Author: Jo Vinkenroye
Tags: Claude, AI Agents, Anthropic, Productivity, Automation

---

Anthropic's new Cowork brings AI agents to everyone, not just developers. Here's why it matters and what the community really thinks about it.

So anthropic just dropped something pretty interesting. Claude Cowork. And I think it might be a bigger deal than people realize

It's basically Claude Code but for everyone. Not just us devs

## what is cowork actually

Ok so here's the deal. You give Claude access to a folder on your computer and it can read, edit, create files. All on its own.
No terminal, no command line, no coding needed

The examples they showed are kinda boring on purpose. Organizing your messy downloads folder, turning receipt screenshots into spreadsheets, drafting reports from random notes. Mundane stuff. But that's the point right

Boris Cherny who made Claude Code said people were already using it for non-coding stuff. Vacation research, slide decks, cleaning up email, even recovering wedding photos from old hard drives. So they just made it official :)

## what people are saying

The reactions have been all over the place. Pretty entertaining actually

### the believers

[@hussamfyi on X](https://x.com/hussamfyi/status/2010848188370956545) had a good take:

> "Claude cowork is a useful reminder that the wrapper is the product. Tech twitter might have you convinced a terminal is the ideal form factor, but packaged experiences with ready-to-use presets is what people want/will use."

And [Raiza Martin](https://x.com/raizamrtn/status/2010840391722135968) was pretty hyped:

> "It's an amazing research partner, data analyst, even a second brain when I can't quite remember something. The ability to connect to your files, external sources, and all of the goodness of Claude is really as close to AGI as I've ever felt."

Alexis Ohanian just wrote "This is big." fair enough haha

### the skeptics

Claire Vo from ChatPRD had some criticism. Said it shows too much of its internal process for regular users but also limits flexibility for power users. Though she admitted it still works better than normal chat

Karthik Hariharan was more direct: "In general, my feeling is code always wins in the end."

I mean, maybe. We'll see

### the worried ones

Simon Willison wrote a [really good piece](https://simonwillison.net/2026/Jan/12/claude-cowork/) about the security side. Anthropic tells users to "monitor Claude for suspicious actions" and he's like...
That's not realistic for normal people

> "I do not think it is fair to tell regular non-programmer users to watch out for 'suspicious actions that may indicate prompt injection'!"

He's right. The people cowork targets are exactly the ones who won't spot when something's going wrong. And yeah some users have already lost files from bad prompts :(

## the startup killer thing

Ok this is where it gets spicy. [Fortune ran a piece](https://fortune.com/2026/01/13/anthropic-claude-cowork-ai-agent-file-managing-threaten-startups/) saying cowork could threaten dozens of startups

And I mean... It's true? File organization, document generation, data extraction. There are so many vc-backed startups doing exactly this. When anthropic bundles it all for $100/month your runway looks a lot shorter

But also, deep domain expertise and good ux can still win. Generic file management isn't the same as specialized workflow tools. Still though, the pressure is real

## my take

Been watching this space for a while now. Here's what I actually think matters:

**Psychology is the product.** [Zvi Mowshowitz](https://thezvi.substack.com/p/claude-coworks) nailed this. Command line vs chat interface is mostly just perception. Both handle text, both can run code. But one feels like talking to someone and the other feels like bash scripts. That shift could be huge

**The speed is the real story.** They built cowork in like 10 days. Using Claude Code. So yeah we're in a recursive loop where ai builds ai tools now. Pretty wild to think about

**$100-200/month is a filter.** This isn't for casual users. It's for people who can justify the cost through time saved. Says a lot about where anthropic sees the value

**Security concerns are valid but overblown.** Yes prompt injection is real. Yes regular users won't catch it. But the sandbox gives better default protection than most of us give ourselves anyway.
And waiting until it's "perfectly safe" means never shipping anything

## what happens next

Google and OpenAI will follow. That's just inevitable at this point. The race to own the ai coworker space is on

For us devs the interesting question isn't whether to use these tools. We already do. It's how to build things that complement them instead of competing. The wrapper-app era might be ending but the platform-extension era is just starting

Cowork is a research preview. Mac only. Rough edges. But it's also the most real glimpse of what ai-augmented work will feel like for normal people

Honestly it feels less like the future arriving and more like the present finally catching up :)

---

**Sources:**

- [TechCrunch](https://techcrunch.com/2026/01/12/anthropics-new-cowork-tool-offers-claude-code-without-the-code/)
- [VentureBeat](https://venturebeat.com/technology/anthropic-launches-cowork-a-claude-desktop-agent-that-works-in-your-files-no)
- [Simon Willison's First Impressions](https://simonwillison.net/2026/Jan/12/claude-cowork/)
- [Zvi Mowshowitz Analysis](https://thezvi.substack.com/p/claude-coworks)
- [Fortune](https://fortune.com/2026/01/13/anthropic-claude-cowork-ai-agent-file-managing-threaten-startups/)
- [Anthropic Official Announcement](https://claude.com/blog/cowork-research-preview)

---

---

# Automating Blog Posts with Claude Code Skills

URL: /blog/automating-blog-posts-with-claude-code-skills
Published: 2026-01-13
Author: Jo Vinkenroye
Tags: Claude Code, Automation, AI, Developer Tools

---

i built a custom skill that turns writing blog posts from a 30-minute chore into a 30-second command

I built a custom claude code skill that turns writing blog posts from a 30-minute chore into a 30-second command

## The Problem

Every time I wanted to write about a project, I'd go through the same tedious steps:

- find screenshots in the project folder
- copy them to the blog assets directory
- create a new mdx file with the right frontmatter
- write the content
- remember the correct category and tag format
- commit everything
- start the dev server to preview

It's not hard. But it's friction. And friction kills momentum

## Claude Code Skills 101

Skills live in `~/.claude/skills/` as markdown files. Each file defines a slash command that claude code can execute. The structure is simple:

[code block]

When you type `/blog` in claude code, it loads the corresponding skill file and follows the instructions

## Anatomy of a Skill

Here's the structure of my blog skill:

[code block]

The key thing here is `$ARGUMENTS`. This is how you pass input to a skill. Whatever comes after `/blog` gets injected there

## Chaining Skills Together

The blog skill calls another skill: `/rewrite`. This converts my drafts into my writing style

[code block]

Skills calling skills. It's composable automation

## The Execution Flow

When I run `/blog ~/Sites/hyperscalper`:

[code block]

All from one command

## The MDX Output

The skill knows exactly what frontmatter my blog needs:

[code block]

Categories, tags, image paths. All defined in the skill so I don't have to remember

## The Meta Part

This post was written using the skill I'm writing about. /blog about the blog skill :D

## Building Your Own

Start simple. Create `~/.claude/skills/your-skill.md` with:

[code block]

Think about what you do repeatedly:

- reviewing prs → `/review` skill with your checklist
- setting up projects → `/init` skill with your preferred stack
- writing docs → `/docs` skill with your format
- posting updates → `/tweet` skill with your voice

## The Stack

My blog runs on next.js 14 with mdx. Posts are just files:

[code block]

No database. No cms. Just markdown and git.
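To make the anatomy concrete, here's what a minimal skill file following the structure described above might look like. Everything in it is invented for illustration, it's not my actual blog skill, and the exact frontmatter fields your setup expects may differ:

```markdown
---
description: Write a blog post from a project folder
---

<!-- Hypothetical example skill - adapt the steps to your own blog setup -->
Write a blog post about the project at: $ARGUMENTS

1. Scan the project folder for screenshots and copy them to the blog assets directory
2. Create a new mdx file with the blog's frontmatter (title, date, category, tags, cover image)
3. Draft the content, then run /rewrite to match my writing style
4. Commit the new post
```

Running `/blog ~/Sites/some-project` would then inject that path wherever `$ARGUMENTS` appears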
The skill writes directly to the repo.

## What's Next

I'm thinking about adding:

- `/imagen` integration for auto-generating cover images
- pulling in relevant code snippets automatically
- cross-posting to dev.to with `/crosspost`

But for now, it's already saving me time and—more importantly—removing the friction that stopped me from writing in the first place

---

---

# I Built a Tool to Generate Video Ads with AI

URL: /blog/building-ad-forge
Published: 2026-01-13
Author: Jo Vinkenroye
Tags: AI, Video Generation, Next.js, Fal.ai, Automation

---

ad forge collapses the entire video ad creation workflow into one ai-powered pipeline. describe your concept, get a finished video

So I run globalpetsitter.com. Connecting pet owners with sitters around the world. Like any startup I need promo content, specifically video ads for social media. The problem is making even a simple 30 second ad takes forever

## The Problem

Every time I wanted to make a video ad the process was something like:

1. Write a script or storyboard
2. Find or create images for each scene
3. Generate or record voiceover
4. Convert images to video clips with motion
5. Sync audio with video
6. Edit everything together in capcut

Each step needs different tools, different logins, tons of context switching. For a 6 scene ad we're talking hours. And if I didn't like the result? Start over

## Ad Forge

I built ad forge to collapse all of this into one pipeline. Describe your ad in plain text, let ai do the heavy lifting

Here's what the output looks like: