Vibe coding: how to build products without writing code

What vibe coding is, how to start, and where it works best. I cloned Block Blast as a Telegram Mini App in two evenings. Here's how.

A few months ago I built a clone of Block Blast — one of the top two mobile games of 2025 — as a Telegram Mini App. Took me two evenings. I didn't write a single line of code by hand. The AI agent handled game logic, canvas rendering, leaderboards, deployment. Everything.

The game is called BlockWall. You can play it right now. And the story of how it got built is the best explanation I have for what vibe coding actually is.

What vibe coding actually means

Andrej Karpathy coined the term in early 2025. The idea is simple: you describe what you want in plain language, and an AI agent writes the code for you. You don't think about syntax, APIs, or implementation details. You describe the outcome. The agent figures out the rest.

This is not autocomplete. GitHub Copilot is autocomplete. You type half a line, it guesses the other half. That's useful, but it's a different thing entirely. Copilot needs you to already know what code to write. Vibe coding doesn't.

Think of it like briefing a developer. You say: "Build me a puzzle game where blocks snap to a grid, completed rows disappear, and there's a score counter." The agent reads that, makes architectural decisions, picks the right tools, writes the code, tests it, and ships it. You watch.

Sometimes you intervene. Sometimes you redirect. But you're working at the level of intent, not implementation.

How to start

Pick something small that you already understand. This is important. If you don't know what the end result should look like, you can't tell whether the agent got it right.

Good first projects:

  • A personal website or landing page
  • A simple bot that does one thing (sends reminders, tracks habits)
  • A clone of something that already exists (this is what I did with BlockWall)
  • An automation script for something you do manually every week

Bad first projects: anything novel where you can't clearly describe the expected behavior. If you're inventing a new product category, you'll spend more time figuring out what you want than building it. Save that for later.

Now, the actual process. You talk to the agent like you'd brief a developer. Be specific about the things that matter. Be vague about the things you don't care about.

For BlockWall, my brief was something like: "I want a Telegram Mini App that's a clone of Block Blast. Grid-based puzzle game. Blocks fall, you place them, completed rows and columns clear. Score tracking. Leaderboard. The visual style should feel like a modern mobile game, not a prototype."

That last sentence matters. Without it, the agent would have given me a functional but ugly grid with no polish. Specificity on things you care about. Silence on things you don't.

Where vibe coding works best

I've noticed a clear pattern after months of doing this daily.

Existing codebases with good documentation work best. If you have a project with a README, structured folders, and clear naming conventions, the agent reads all of that. It understands your patterns and extends them. I've added entire features to projects just by saying "add a settings page that follows the same pattern as the profile page." The agent finds the profile page, studies its structure, and replicates it for settings. Clean.

I wrote about this setup in my Claude Code workflow article and also in my piece on building a Second Brain with Obsidian. The documentation layer is what makes the agent effective.

Green-field projects need more guidance. When there's no existing code to learn from, the agent needs more from you. A spec document helps enormously here (more on that below). Reference designs help too. I often screenshot an existing app and say "make it look like this."

Automations and scripts are the sweet spot for beginners. When I converted a colleague from Copilot to Claude Code, the biggest unlock wasn't building apps. It was automating repetitive tasks. Renaming 500 files. Parsing CSVs. Generating reports from a database. These are tasks where the expected output is clear and the agent rarely gets confused.
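
For a sense of scale, a task like "rename 500 files to a consistent format" boils down to a script this small. This is a hypothetical Python sketch (the function names and the `dry_run` flag are mine, not from any real project), with the dry run defaulting to on so you can review the plan before anything is touched:

```python
# Hypothetical automation sketch: rename every CSV in a folder to a
# lowercase, dash-separated name. The kind of task where the expected
# output is unambiguous, so the agent rarely gets confused.
from pathlib import Path


def normalized_name(name: str) -> str:
    """'Sales Report 2025.CSV' -> 'sales-report-2025.csv'"""
    stem, dot, ext = name.rpartition(".")
    return stem.lower().replace(" ", "-") + dot + ext.lower()


def rename_all(folder: str, dry_run: bool = True) -> list[tuple[str, str]]:
    """Return (old, new) name pairs; only touch disk when dry_run=False."""
    renames = []
    for path in sorted(Path(folder).glob("*")):
        if path.is_file() and path.suffix.lower() == ".csv":
            target = path.with_name(normalized_name(path.name))
            renames.append((path.name, target.name))
            if not dry_run:
                path.rename(target)
    return renames
```

In practice I wouldn't write this myself at all. I'd describe it in one sentence and let the agent produce something equivalent, then skim the dry-run output.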

Voice input changes everything

Here's a workflow detail that sounds minor but changed how fast I work: voice input.

I use Spokenly. It runs locally on Mac, it's free, and the interface is dead simple. Hold the Command key, talk, release. Your speech appears as text wherever your cursor is. Including in the terminal where Claude Code is running.

Why this matters: talking is roughly 3x faster than typing for most people. When you're vibe coding, you're describing things in natural language anyway. So instead of typing out a paragraph about what the next feature should do, I just say it out loud.

"Add a share button to the game over screen. When the user taps it, generate an image of their score and open the Telegram share dialog." That takes me four seconds to say. It would take thirty seconds to type. Over a full coding session, the difference adds up fast.

Is it faster than a brain-computer interface? No. But those don't exist yet and Spokenly does.

The BlockWall story

Let me walk through the actual build. January 2025. Block Blast is everywhere. Number two in the App Store. Everyone's playing it during commutes.

I wanted to see if I could rebuild it as a Telegram Mini App. Not to compete with them, but to test the limits of vibe coding on a real product. Games are hard. They have rendering loops, collision detection, animation, state management. If the agent could handle a game, it could handle anything.

Evening one: I opened Claude Code and described the game. Grid-based puzzle. 8x8 board. Three random block shapes appear at the bottom. You drag and place them. When a full row or column is filled, it clears. Score goes up. Game ends when you can't place any remaining blocks.
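
The clearing rule in that brief is simple enough to sketch. The real game runs on canvas in JavaScript, so this is just an illustrative Python version with hypothetical names (`clear_lines`, `can_place`) showing the logic the agent had to get right:

```python
# Minimal sketch of the core rules: an 8x8 boolean board, full rows
# and columns clear simultaneously, and the game ends when no shape fits.
GRID = 8


def clear_lines(board):
    """Clear every full row and column; return (new_board, lines_cleared)."""
    full_rows = [r for r in range(GRID) if all(board[r])]
    full_cols = [c for c in range(GRID) if all(board[r][c] for r in range(GRID))]

    new_board = [row[:] for row in board]
    for r in full_rows:
        new_board[r] = [False] * GRID
    for c in full_cols:
        for r in range(GRID):
            new_board[r][c] = False

    return new_board, len(full_rows) + len(full_cols)


def can_place(board, shape, top, left):
    """A shape (list of (dr, dc) offsets) fits iff every cell is on-board and empty."""
    for dr, dc in shape:
        r, c = top + dr, left + dc
        if not (0 <= r < GRID and 0 <= c < GRID) or board[r][c]:
            return False
    return True
```

Game-over detection is then just: for each remaining shape, try `can_place` at every position; if nothing fits, the run is over.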

The agent built the core game in about an hour. Canvas rendering, touch handling for drag-and-drop, block generation, row/column clearing logic. I played a few rounds. The mechanics were right. The feel was wrong. Blocks snapped too fast. No animation on row clearing. The colors were dull.

So I said: "Add a smooth animation when rows clear. Use a brief flash effect, then fade out. Make the color palette brighter. Add a subtle bounce when blocks snap into place." Another twenty minutes. Now it felt like a game, not a spreadsheet.

Evening two: Leaderboard. Telegram user authentication. A start screen. Sound effects. Deployment to my server via Coolify. Published as @blockwallbot.

Two evenings. Zero lines of code written by hand. And it works. People play it. It has a leaderboard. The agent built all of it.

The spec-driven approach

After BlockWall, I started writing spec documents for every project. This is the single biggest improvement to my vibe coding workflow.

A spec is just a markdown file that describes what you want. Not how to build it. What it should do, what it should look like, what edge cases to handle. Think of it as a product requirements document, but written for an AI agent instead of a human developer.

My specs typically include:

  • One-paragraph summary of the project
  • List of features in priority order
  • Descriptions of each screen or view
  • Technical constraints ("must work as a Telegram Mini App", "use SQLite for storage")
  • Things to avoid ("no external CSS frameworks", "don't add authentication yet")
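
To make that concrete, here's the shape of such a spec. The content below is illustrative, modeled on BlockWall, not the actual file:

```markdown
# BlockWall — spec (illustrative)

A Telegram Mini App clone of Block Blast: an 8x8 grid puzzle where
placed blocks clear full rows and columns.

## Features (priority order)
1. Core loop: place blocks, clear lines, track score
2. Game-over detection when no remaining block fits
3. Leaderboard backed by Telegram user identity

## Screens
- Start screen: play button, best score
- Game screen: board, three candidate blocks, score counter
- Game-over screen: final score, share button

## Constraints
- Must run as a Telegram Mini App
- Use SQLite for storage

## Avoid
- No external CSS frameworks
- Don't add sound settings yet
```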

The workflow then becomes:

  1. Write the spec
  2. Open a fresh Claude Code session
  3. The agent reads the spec and starts building
  4. If something's wrong, I edit the spec, not the code
  5. Start a new session so the agent reads the updated spec fresh

That last step is important. Context pollution is real. If you've been going back and forth with the agent for an hour, it has a lot of conversational baggage. Old mistakes, abandoned approaches, conflicting instructions. Starting a new session with a clean spec gives you a clean slate.

Geoffrey Huntley wrote a great piece on this approach at ghuntley.com/agent that goes deeper into spec-driven development. Worth reading.

When it breaks (and what to do)

It breaks. Regularly. Here's what I've learned about recovery.

Don't try to fix the code yourself. This is the hardest habit to break if you have a programming background. You see a bug, you know exactly which line to change, and your instinct is to open the file and fix it. Resist this. The moment you start editing code manually, you're no longer vibe coding. You're coding. And the agent might overwrite your change on the next iteration anyway.

Instead: describe the bug to the agent. "When I place a block in the bottom-right corner, the row doesn't clear even though it's full." Let the agent find and fix the issue. It will often fix it better than you would because it considers the whole codebase, not just the one line you spotted.

Edit the prompt, not the code. If the agent keeps producing the wrong thing, the problem is usually in your description, not in the agent's abilities. Rephrase. Add examples. Be more specific about the part that's going wrong.

The Ralph Wiggum technique. Sometimes the agent gets stuck in a loop. It tries to fix something, breaks something else, fixes that, breaks the first thing again. You can feel it going in circles. When this happens, stop. Delete the conversation. Open a fresh session. Give better instructions this time based on what you learned from the failed attempt.

I call this the Ralph Wiggum technique because the agent's behavior reminds me of Ralph repeatedly running into a wall. The solution isn't to help Ralph navigate around the wall. The solution is to start over and build the wall somewhere else.

Fresh context, clear spec, new attempt. This works surprisingly well. The agent doesn't carry emotional baggage from the failed session. It just reads the spec and builds.

Codex vs Claude Code

People ask me which tool to use. The answer depends on how you work.

Claude Code runs locally in your terminal. It sees your files, your project structure, your git history. It can run commands, execute tests, start servers. The feedback loop is tight. You say something, it does something, you see the result immediately. This is what I use for active building sessions where I'm iterating fast.

Codex (OpenAI's agent) runs in the browser. You give it a task, it works on it asynchronously. You can close the tab, go make coffee, come back and check the result. This is better for delegation. "Refactor this module to use the new API" is a task you can fire-and-forget.

Different workflows, different tools. I use Claude Code for 90% of my work because I like the tight feedback loop. But for tasks where I know exactly what I want and don't need to watch the agent work, Codex is good enough.

The choice also depends on your comfort level. If you're new to vibe coding, Claude Code's local terminal gives you more visibility into what's happening. You can see every file it creates, every command it runs. With Codex, you see the final result. For learning, visibility matters. Check out my AI tools setup guide for more on how I configure these tools.

What I actually learned

After months of vibe coding daily, here's what sticks.

The bottleneck moved. It used to be implementation speed. Now it's clarity of thought. If I can describe exactly what I want, the agent builds it fast. If my thinking is fuzzy, the output is fuzzy. Vibe coding rewards clear thinking more than any traditional programming ever did.

You don't need to know how to program. But you need to know how to describe what you want. That's a different skill. It's closer to product management than software engineering. You need to understand users, edge cases, priorities.

The $200/month I spend on Claude Max is the best investment I've made in productivity. Not because the agent is perfect. Because it removes the gap between having an idea and having a working prototype. That gap used to be days or weeks. Now it's hours.

And one more thing. Vibe coding is genuinely fun. Not in the way traditional programming is fun (solving puzzles, elegant abstractions). Fun in a different way. You have an idea at lunch. By dinner, it exists. You can show it to people. They can use it. That feeling of building something real, in one sitting, from nothing but a description? It doesn't get old.

If you want to try it: pick something small. Describe it clearly. Let the agent build it. Don't touch the code. See what happens.

You might be surprised.


Find me elsewhere: X (Twitter) · Telegram · GitHub


Dan Okhlopkov — AI agent practitioner. Building tools for TON Blockchain analysis and Telegram automation.
