I stopped writing code by hand. Here is my workflow in 2026.

My workflow with Claude Code: plan mode, spec-driven development, and why AI agents replaced autocomplete. Real examples from building Telegram Mini Apps, analytics dashboards, and automating everything.

I wrote a post on Habr about coding with AI agents. It got 59,000 views. The comments were wild. Half the people said I was delusional, the other half DM'd me asking for my exact setup.

So here it is. Everything I do, how I actually work, and why I haven't opened a file in my editor to write code from scratch in months.

What an AI agent actually is

Let me clear up the biggest misconception first. When people hear "AI coding", they think of GitHub Copilot. That gray text that appears while you type. Autocomplete on steroids.

That is not what I'm talking about.

An AI agent is fundamentally different. It's a while True loop. The agent gets a task, picks a tool, executes it, reads the result, decides what to do next, picks another tool, and keeps going until the task is done. The tools are real: read files, write files, run shell commands, browse the web, call APIs.

Geoffrey Huntley wrote the best technical explanation of this pattern at ghuntley.com/agent. If you want to understand the architecture, read that. The short version: Copilot suggests the next line. An agent builds the entire feature.

Think of it this way. Copilot is like a passenger who sometimes points at road signs. An agent is a driver who knows the destination, reads the map, and turns the steering wheel.
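Here's the pattern as a TypeScript sketch. Everything in it is a stand-in I made up for illustration: the tool names, callModel, and executeTool are not Claude Code's real internals, just the shape of the loop.

    type ToolName = "read_file" | "write_file" | "run_bash" | "web_search";
    type ToolCall = { tool: ToolName; input: string };
    type ModelStep = { done: boolean; toolCall?: ToolCall; summary?: string };

    // Stand-in for the LLM call: decides the next action from the history so far.
    async function callModel(history: string[]): Promise<ModelStep> {
      return { done: true, summary: `stub decision after ${history.length} steps` };
    }

    // Stand-in for real tool execution: file I/O, shell commands, web search.
    async function executeTool(call: ToolCall): Promise<string> {
      return `stub result for ${call.tool}(${call.input})`;
    }

    async function runAgent(task: string): Promise<string> {
      const history = [`Task: ${task}`];
      while (true) {                                         // the "while True" loop
        const step = await callModel(history);               // model picks a tool or declares it's done
        if (step.done) return step.summary ?? "done";
        const result = await executeTool(step.toolCall!);    // actually run the tool
        history.push(`${step.toolCall!.tool} -> ${result}`); // feed the result back into context
      }
    }

The whole trick is that last line: every tool result goes back into the model's context, so the next decision is made with full knowledge of what just happened.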

Why Claude Code and not Cursor or Copilot

I've tried them all. Cursor, Copilot, Windsurf, Cline, Aider. I settled on Claude Code for one reason: it's a CLI tool that runs in your terminal with zero UI between you and the agent.

No fancy IDE. No split panes. You open your terminal in the project directory and type claude. That's it. You're talking to an agent that can see your entire codebase, run your tests, check your git history, and browse documentation.

The tools are built in:

  • File system: reads and writes any file in your project
  • Bash: runs any terminal command (npm install, pytest, docker-compose, whatever)
  • Web search: looks up documentation, Stack Overflow answers, API references
  • MCP servers: custom integrations (I use Coolify MCP for deployments, Telegram MCP for messaging)

Claude Code is made by Anthropic, the same company that makes the Claude model. First-party is the best party. When new model capabilities drop, Claude Code gets them immediately. Third-party tools are always playing catch-up.

Pricing: $20/month for Claude Pro (good enough to start), $100/month for Claude Max with 5x usage (what I use). Some months I burn through the Max quota in two weeks. Other months I barely touch it. Either way, it pays for itself in hours saved on the first day. Check anthropic.com/claude for current plans.

The actual workflow: three steps

Here's what my day looks like.

Step 1: talk about it

I start by describing what I want. Not in code. Not in pseudocode. In plain language, like I'm explaining it to a colleague over coffee.

"I need a page that shows the top 50 Telegram bots by monthly active users, sorted by growth rate. Each row should have a sparkline chart. The data comes from our PostgreSQL database."

Claude asks clarifying questions. Should the sparkline cover 30 days or 90? Do I want absolute growth or percentage? Should bots with suspected fake traffic be filtered out? This back-and-forth takes two or three minutes. By the end, we both understand what we're building.

Step 2: plan mode

I press Shift+Tab to switch Claude into plan mode. In this mode, the agent reads your codebase but doesn't change anything. It explores your file structure, reads your existing components, checks your database schema, and then proposes an approach.

"I'll create a new API endpoint at /api/top-bots, query the mau_history table, compute growth rates in SQL, add a React component using the existing SparklineChart from your chart library, and wire it into the reports page."

I read the plan. Sometimes I agree. Sometimes I say "no, use a server component instead of a client fetch" or "we already have a utility for this in utils/mau.ts". We iterate on the plan until it makes sense.
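To make that concrete, here's roughly what such a plan turns into: a TypeScript sketch of a /api/top-bots route handler running plain SQL against Postgres. This is my illustration, not the code Claude actually wrote, and the table and column names (mau_history, bot_id, month, mau) are assumptions.

    import { Pool } from "pg";

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    export async function GET() {
      // Growth rate of the latest month vs. the previous one, plus a
      // 12-month MAU series for the sparkline, all computed in SQL.
      const { rows } = await pool.query(`
        WITH ranked AS (
          SELECT bot_id, month, mau,
                 LAG(mau) OVER (PARTITION BY bot_id ORDER BY month) AS prev_mau
          FROM mau_history
          WHERE month >= NOW() - INTERVAL '12 months'
        ),
        latest AS (
          SELECT DISTINCT ON (bot_id)
                 bot_id,
                 (mau - prev_mau)::float / NULLIF(prev_mau, 0) AS growth_rate
          FROM ranked
          ORDER BY bot_id, month DESC
        )
        SELECT r.bot_id,
               l.growth_rate,
               ARRAY_AGG(r.mau ORDER BY r.month) AS sparkline
        FROM ranked r
        JOIN latest l USING (bot_id)
        GROUP BY r.bot_id, l.growth_rate
        ORDER BY l.growth_rate DESC NULLS LAST
        LIMIT 50;
      `);

      return Response.json(rows); // [{ bot_id, growth_rate, sparkline: number[] }]
    }

The point isn't this exact query. It's that the plan is specific enough that you can tell whether the diff Claude produces matches what you agreed on.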

Step 3: execute

I press Enter. Claude starts working. It creates files, writes SQL, builds components, runs the dev server, checks for TypeScript errors, fixes them, runs the linter, fixes that too. I watch the terminal scroll. Most tasks take between 30 seconds and five minutes.

When it's done, I review the changes with git diff. If something looks off, I tell Claude what to fix. If it's good, I commit.

That's it. Discuss, plan, execute, review. Repeat.

Context management: the part everyone gets wrong

The number one mistake people make with AI agents is having one enormous conversation that goes on for hours. The context window fills up. The agent starts forgetting things you said earlier. Quality degrades.

My solution: specs beat conversations.

Instead of keeping everything in one chat, I write specifications as markdown files and store them in the project. Every project I work on has a CLAUDE.md file in the root. This is the first thing the agent reads when you start a new session. Mine includes project structure, coding conventions, deployment instructions, and links to relevant documentation.

For bigger features, I create spec files. ai-docs/pseo-implementation-plan.md, ai-docs/architecture.md, reports-v2-spec.md. These are written once (sometimes with Claude's help) and then referenced across many sessions.

The workflow becomes: write a spec, close the chat, open a fresh chat, and say "read the spec at ai-docs/feature-x.md and implement phase 2". The agent reads the spec, understands the full context, and starts working with a clean, focused context window. I wrote more about this approach in my piece about using Obsidian as a second brain with Claude Code.

Real examples from my projects

Converting a Copilot user in 30 minutes

A colleague had been using GitHub Copilot for a year. Good developer, comfortable with autocomplete, skeptical about agents. I sat next to him, installed Claude Code on his machine, and said "describe something tedious you've been putting off."

He wanted to automate a weekly report that he'd been building manually in Google Sheets. Pull data from an API, calculate some metrics, format it, email it to his team.

He described it to Claude in plain Russian. Claude asked three questions. Then it wrote a Python script, tested it, and the thing worked. Thirty minutes from installation to a running automation. No Stack Overflow tabs. No copy-pasting boilerplate.

He texted me the next day: "I automated two more things last night." That's the conversion moment. Once you feel the speed difference, there's no going back.

BlockWall: a game built in two evenings

BlockWall is a Block Blast clone that runs as a Telegram Mini App. I built it in two evenings with Claude Code. Not a weekend hackathon with energy drinks and desperation. Two calm evenings, maybe four hours total.

I described the game mechanics. Claude generated the grid logic, the piece rotation, the scoring system. I described the visual style I wanted. Claude built the React components with CSS animations. I said "make it a Telegram Mini App" and Claude added the Telegram WebApp SDK integration, the bot backend, and the deployment config.

Did I write code? Some. I tweaked animations, adjusted colors, fixed a scoring edge case by hand. But the structure, the boilerplate, the wiring between components, that was all Claude. The game is live at @blockwallbot on Telegram.
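For a sense of what "grid logic and the scoring system" means here, this is the kind of function involved, written as a TypeScript sketch. It's my illustration of the genre, not the actual BlockWall source, and the scoring constant is arbitrary.

    type Board = boolean[][];                   // true = occupied cell
    type Piece = { cells: [number, number][] }; // cell offsets relative to the drop position

    // The caller checks this before dropping a piece.
    function canPlace(board: Board, piece: Piece, row: number, col: number): boolean {
      return piece.cells.every(([dr, dc]) => {
        const r = row + dr, c = col + dc;
        return r >= 0 && c >= 0 && r < board.length && c < board[0].length && !board[r][c];
      });
    }

    // Drop the piece, clear any full rows or columns, return the points earned.
    function placeAndScore(board: Board, piece: Piece, row: number, col: number): number {
      piece.cells.forEach(([dr, dc]) => { board[row + dr][col + dc] = true; });

      const fullRows = board.map((r, i) => (r.every(Boolean) ? i : -1)).filter((i) => i >= 0);
      const fullCols = board[0].map((_, j) => (board.every((r) => r[j]) ? j : -1)).filter((j) => j >= 0);

      fullRows.forEach((i) => board[i].fill(false));                    // clear full rows
      fullCols.forEach((j) => board.forEach((r) => { r[j] = false; })); // clear full columns

      return (fullRows.length + fullCols.length) * 10;                  // 10 points per cleared line
    }

Describing rules like these in plain language and letting the agent write the first version is exactly where this workflow shines.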

Data analytics with Remotion: 7 minutes to a video report

This one still blows my mind. I gave Claude a DATABASE_URL pointing to my SwanRate analytics database and said: "Calculate the product metrics you think matter for a Telegram bot analytics platform."

Claude connected to Postgres, explored the schema, wrote SQL queries, computed DAU/MAU ratios, retention curves, growth rates by segment. Standard stuff, but done in minutes instead of an afternoon.

Then Claude said: "Want me to make a video presentation of these results?"

I had just installed the Remotion skill that morning (Remotion is a React library for rendering video programmatically). Claude wrote a Remotion composition with animated charts, transitions between slides, and a title card. Seven minutes after I gave it the database URL, I had an MP4 file on my desktop.

Seven minutes. From raw database to finished video. That's when I realized this isn't just about writing code faster. It's about collapsing entire workflows into a single conversation.
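If you're wondering what "a Remotion composition" looks like, here's a minimal TypeScript sketch of one animated slide. It's an illustration of the idea, not the composition Claude generated; the colors and sizing are placeholders.

    import React from "react";
    import { AbsoluteFill, useCurrentFrame, interpolate } from "remotion";

    export const MetricSlide: React.FC<{ label: string; value: string }> = ({ label, value }) => {
      const frame = useCurrentFrame();
      // Fade the slide in over the first 20 frames (two thirds of a second at 30 fps).
      const opacity = interpolate(frame, [0, 20], [0, 1], { extrapolateRight: "clamp" });

      return (
        <AbsoluteFill
          style={{ backgroundColor: "#0f172a", color: "white", opacity,
                   justifyContent: "center", alignItems: "center" }}
        >
          <h1 style={{ fontSize: 120, margin: 0 }}>{value}</h1>
          <p style={{ fontSize: 48 }}>{label}</p>
        </AbsoluteFill>
      );
    };

Register a component like this in the Remotion root with a Composition (id, component, fps, durationInFrames, width, height), run npx remotion render, and out comes the MP4.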

What this changes about being a developer

I used to think in functions and files. Now I think in outcomes. "I need a report page" instead of "I need to create a React component that fetches from this endpoint and renders a table." The agent handles the decomposition.

This doesn't mean you can be clueless about code. You need to read the diffs. You need to catch when the agent picks a bad approach. You need to know enough to say "use a CTE instead of a subquery" or "this component should be server-rendered". The skill shifts from writing code to reviewing code and describing intent clearly.

Some people call this vibe coding. I don't love the term, but the idea is right: you describe the vibe of what you want and the agent handles the implementation details.

I'm also more skeptical of frameworks than I've ever been. Every abstraction layer is another thing the agent needs to understand. The simpler your stack, the better the agent performs. Vanilla CSS beats Tailwind for agent-generated UI. Plain SQL beats ORMs for data queries. Simple file structures beat clever monorepo configs. I wrote about my full tool setup separately.

Getting started (the 5-minute version)

You don't need a complicated setup. Here's the minimum:

  1. Get a Claude subscription. Go to anthropic.com/claude and sign up for Claude Pro at $20/month. That's enough to start.
  2. Install Claude Code. Open your terminal and run:
    npm install -g @anthropic-ai/claude-code
  3. Open a project. Navigate to any project directory in your terminal.
  4. Type claude and start talking. Describe what you want to build, fix, or change.

That's it. No config files. No API keys to manage. No IDE plugins to install. The agent picks up your project structure automatically.

If you want to go deeper, create a CLAUDE.md file in your project root with instructions about your codebase. The agent reads it every time you start a session. Mine is about 50 lines and covers project structure, coding conventions, and deployment steps.
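If it helps, here's the shape mine takes. This is a trimmed, illustrative sketch, not my actual file; the paths and rules are placeholders to adapt.

    # CLAUDE.md

    ## Project structure
    - app/      pages and API routes
    - lib/      shared utilities (database access lives in lib/db.ts)
    - ai-docs/  feature specs referenced from prompts

    ## Conventions
    - TypeScript strict mode, no any
    - Plain SQL via pg, no ORM
    - Server components by default; client components only where there's interactivity

    ## Deployment
    - npm run build must pass before committing
    - Deploys go through Coolify; never push straight to production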

The honest downsides

I should be straight about the limitations.

Claude Code burns through tokens fast. On complex tasks, a single session can use the equivalent of $5-10 in API credits. The $20/month Pro plan will run out quickly if you use it all day. The $100/month Max plan is more realistic for daily use.

The agent sometimes goes in circles. It tries an approach, hits an error, tries a different approach, hits another error, and loops. When this happens, I stop it, explain the problem more clearly, or point it to the right file. Experience helps you recognize when to intervene.

It's terrible at large-scale refactoring across dozens of files in one shot. Break big tasks into smaller ones. "Refactor the authentication module" works. "Refactor the entire backend" does not.

And the obvious one: you're trusting an AI to write code that runs in your infrastructure. Review the diffs. Run the tests. Don't deploy blindly.

Where this is going

Six months ago I was debating whether AI coding tools were worth the subscription. Now I can't imagine working without one. The agent handles the tedious 80% so I can focus on the interesting 20%: architecture decisions, product thinking, and the weird edge cases that actually require a human brain.

I wrote the original version of this article in Russian on Habr (link). The response convinced me that most developers are either already using agents or are about to start. The ones still on the fence are the same people who resisted version control in 2008. They'll come around. The productivity gap is too large to ignore.

If you try Claude Code after reading this, let me know how it goes. I'm genuinely curious about other people's workflows.


Find me elsewhere: X (Twitter) · Telegram · GitHub