My AI tools and setup in January 2026

The actual tools I use daily: Ghostty, Spokenly, Descript, ownyourchat, agent-browser, Coolify MCP, and how they fit together. No frameworks, no hype, just a developer's real toolbox.

People keep asking me about my setup. After the Claude Code workflow piece, I got dozens of DMs along the lines of "what else do you use?" and "what's your terminal?" and "how do you deploy stuff?"

So here's the full picture. Every tool I use daily, why I picked it, and what I'd drop if I had to cut the list in half. This isn't a "top 10 apps" listicle. It's the toolbox I actually reach for when building things.

The philosophy: cherry-pick, don't adopt

Before I get into individual tools, I need to explain how I think about tooling. Because the "what" doesn't matter without the "how."

I'm deeply skeptical of frameworks. Not libraries, not individual tools, but the big bundled "install my dotfiles and be productive" packages that people share on GitHub. Someone builds a setup that works for their brain, their projects, their habits. Then they package it up and tell you to install it. Suddenly you're running 40 MCP servers, 15 custom prompts, and a task management system designed for someone else's workflow.

My approach: study what other people build. Understand the ideas behind their tools. Then cherry-pick the specific pieces that solve a problem you actually have, and wire them into your own setup. The result is smaller, simpler, and yours. I wrote about this same principle applied to coding in my vibe coding guide.

"First-party is the best party" is a rule I come back to constantly. Official tools from the platform makers will always be better maintained than community wrappers. When Anthropic ships a feature in Claude Code, it lands in the CLI on day one. Third-party Cursor plugins get it weeks later, if at all. When Telegram updates their Bot API, the official SDK gets it first. This matters.

Terminal: Ghostty

Let's start with the most basic thing. I use Ghostty as my terminal emulator.

I used iTerm2 for years. It's fine. It's also the kind of software where you can spend an entire afternoon adjusting keybindings, profiles, color schemes, and tmux integration. I did that. Multiple times. Every Mac setup, another afternoon gone.

Ghostty is the opposite. You install it and it works. Fast rendering, sensible defaults, no configuration rabbit holes. Font rendering is better than iTerm out of the box. I haven't opened its settings once since installing it.

It's built by Mitchell Hashimoto, co-founder of HashiCorp (the company behind Vagrant, Terraform, and Vault). Free and open source. I mention who built it because it matters for longevity. This isn't some weekend project that'll be abandoned in six months.

If you've ever felt guilty about how much time you've spent configuring iTerm, just switch. You'll forget you made the change within a day, which is exactly the point.

Voice-to-text: Spokenly

This is probably the tool that's changed my daily workflow the most, and it's the one people are least likely to have heard of.

Spokenly runs local speech-to-text on your Mac. You hold the Command key, speak, and the transcribed text appears wherever your cursor is. That's the entire product. No cloud. No account. No subscription. Everything runs on-device using Apple's speech models.

I use it with Claude Code constantly. Instead of typing out a multi-paragraph prompt describing what I want the agent to do, I hold Command and talk for thirty seconds. Talking is roughly three times faster than typing for me, and more importantly, I think more clearly when I speak than when I type. Typing makes me edit as I go. Speaking lets me get the whole thought out first.

A typical workflow: I look at a piece of code, hold Command, and say "this component is fetching data on every render because the dependency array includes the config object which gets recreated each time, wrap the config in useMemo or extract it outside the component." Spokenly transcribes it, Claude Code reads it, and the fix happens in seconds. The bottleneck is no longer my typing speed.

I also use Spokenly for writing. This article started as a voice recording that I cleaned up afterwards. My Telegram posts often start the same way. When I'm on a walk thinking about something, I hold Command and capture the thought before it evaporates.

Free, local, no login required. I genuinely don't understand why more developers don't use speech-to-text tools.

Video editing: Descript

I recently started a YouTube show called "Show Me Your AI Setup" (youtube.com/@danokhlopkov) where I invite people to demonstrate their actual AI workflows on camera. Not marketing demos, not sponsored content, just real people showing how they actually work.

The first episode was with my friend Misha (@og_mishgun in our Telegram community). We recorded a live stream where he walked through his entire setup. The raw recording was over an hour long with dead air, tangents, and the usual mess of a live conversation.

Descript turns video editing into text editing. It transcribes your entire video, and then you edit the transcript. Delete a sentence from the text, and the corresponding video gets cut. Rearrange paragraphs, and the video rearranges. It sounds like it shouldn't work, but it does, and it's fast.

I'm not a video editor. I don't know Premiere Pro or Final Cut. I don't want to learn timeline-based editing for what is essentially a podcast with a webcam. Descript lets me edit video the same way I edit a Google Doc: select, delete, move on. The show wouldn't exist without it, because I would never have started if the editing required learning a professional non-linear editor.

It's paid software. Worth it if you produce any kind of video or podcast content. Not worth it if you don't.

Chat history: ownyourchat

This one came directly from the AI Setup show. During the live stream with Misha, he showed us a project he built: ownyourchat.

The problem is simple. You've had hundreds of conversations with ChatGPT, Claude, Perplexity. Somewhere in those conversations is that one approach to a database migration, or that regex pattern, or the name of that obscure library someone recommended. Good luck finding it through the web interfaces of three different AI services.

ownyourchat syncs all your AI conversations into a local SQLite database. Then you can grep them. Plain text search across every conversation you've ever had with every AI service. It runs locally, your data stays on your machine.
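Once everything lives in one SQLite file, the search step is just SQL. Here's a minimal sketch of the idea using a hypothetical messages table; ownyourchat's actual schema will differ, so treat the column names as placeholders:

```python
import sqlite3

# Hypothetical schema -- ownyourchat's real table layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (service TEXT, conversation TEXT, content TEXT)"
)
conn.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [
        ("claude", "api-design", "Use a token bucket for rate limiting."),
        ("chatgpt", "db-notes", "Run the migration inside a transaction."),
        ("perplexity", "libs", "Someone recommended the httpx library."),
    ],
)

# One plain-text search across every service at once -- the "grep" step.
rows = conn.execute(
    "SELECT service, conversation FROM messages WHERE content LIKE ?",
    ("%rate limiting%",),
).fetchall()
print(rows)  # [('claude', 'api-design')]
```

The point is that your entire AI chat history becomes a regular database you can query however you like, instead of three separate web UIs with three separate search boxes.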

I've used it to find things I forgot I even discussed. Last week I was trying to remember a specific approach to rate limiting that I'd talked through with Claude months ago. Instead of scrolling through the Claude web UI, I ran a search query against my local database and found it in seconds.

It's open source, free, and built by someone I know personally, which means I can bug him directly when something breaks. That last part isn't scalable advice, but the tool is solid regardless.

Browser automation: agent-browser

Claude Code can run shell commands. Shell commands can control a headless browser. Connect those two facts and you get an agent that can interact with websites: scrape data, fill forms, run visual checks, automate repetitive web tasks.

The official way to do this is Chrome MCP, which gives the agent control over a Chrome instance. I tried it. It worked sometimes and crashed other times. The connection felt fragile during longer sessions.

agent-browser is a CLI tool for headless browser automation built by Vercel. More stable for me than Chrome MCP. You give it a URL and instructions, it navigates, clicks, types, and returns results. Since it's a CLI tool, Claude Code calls it directly through bash commands.

I use it for web scraping when data doesn't have an API, for testing web apps after deploying, and for automating form submissions. Not something I use every day, but when I need browser automation, I reach for this instead of writing Puppeteer scripts.

Deployment: Coolify with MCP

My infrastructure: one Hetzner server, Coolify for deployment management. That's it. Coolify is a self-hosted alternative to Vercel/Netlify/Railway. Install it on your server, connect GitHub repos, and it handles builds, deployments, SSL, and environment variables.

The Coolify MCP server connects Claude Code directly to my Coolify instance. Check if production is running, read deployment logs, restart a service, check env vars, all without leaving the terminal.
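Wiring it up is one config file. A sketch of what the entry in Claude Code's .mcp.json looks like; the package name, URL, and env var names here are placeholders, so check the Coolify MCP server's README for the real ones:

```json
{
  "mcpServers": {
    "coolify": {
      "command": "npx",
      "args": ["-y", "coolify-mcp-server"],
      "env": {
        "COOLIFY_BASE_URL": "https://coolify.example.com",
        "COOLIFY_API_TOKEN": "your-api-token"
      }
    }
  }
}
```

After that, Claude Code sees the Coolify tools in every session, no extra setup per conversation.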

A real scenario from last week: I deployed a new version of my Telegram bot, users reported it wasn't responding. I typed in Claude Code: "Check the logs for the takopi bot, it seems to be down." Claude called the Coolify MCP, pulled the last 50 lines of logs, found an unhandled exception on startup, and suggested a fix. I applied it, told Claude to redeploy, and the bot was back up. Under two minutes, no browser involved.

This is what MCP servers are for. Not abstract "tool use" demonstrations, but connecting the agent to the infrastructure you actually manage.

Agent infra: takopi (Claude Code via Telegram)

I built a Telegram bot called takopi that lets me interact with Claude Code from my phone. Send a voice message, get a structured response. Send a text message, get my Obsidian vault updated. It's essentially a mobile interface to the second brain system I described in an earlier article.

The use case is mobility. I'm walking somewhere and I have an idea about a project. I don't want to open my laptop. I pull out my phone, open Telegram, hold the record button, and speak for 30 seconds about the idea. takopi receives the voice message, transcribes it, passes it to Claude Code, and Claude updates the relevant project files in my Obsidian vault. When I get back to my laptop, the idea is already filed where it belongs, with tags and cross-references to related projects.
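The glue for that hand-off is thin. This is a simplified sketch of the idea, not takopi's actual code: the transcription step and Telegram wiring are omitted, and the prompt wording is illustrative. The claude -p (non-interactive) and --add-dir flags are real Claude Code CLI options:

```python
import subprocess

def voice_note_to_command(transcript: str, vault_path: str) -> list[str]:
    """Turn a transcribed voice note into a one-shot Claude Code invocation."""
    prompt = (
        "File this voice note into my Obsidian vault. "
        "Pick the right project file, add tags and cross-references.\n\n"
        f"Note: {transcript}"
    )
    # -p runs Claude Code non-interactively; --add-dir grants vault access.
    return ["claude", "-p", prompt, "--add-dir", vault_path]

def handle_voice_note(transcript: str, vault_path: str) -> str:
    """Run the agent and return its reply, ready to send back over Telegram."""
    result = subprocess.run(
        voice_note_to_command(transcript, vault_path),
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout.strip()

cmd = voice_note_to_command("idea: add retries to the scraper", "~/vault")
print(cmd[:2])  # ['claude', '-p']
```

Everything else is plumbing: a bot process that receives the voice message, transcribes it, calls something like handle_voice_note, and replies with the output.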

It also works as a quick lookup tool. "What were the tasks I had for SwanRate?" and takopi responds with the current contents of my SwanRate tasks file. "Remind me of the API structure for the digital twin project" and it pulls the relevant overview.

This is not a product I'm selling. It's a personal tool built specifically for my workflow. I mention it because it illustrates a broader point: the best AI tools are the ones you build for yourself, tailored to your exact needs.

Agent infra: telegram-mcp

This is the other side of the Telegram integration. While takopi lets me talk to Claude Code through Telegram, telegram-mcp gives Claude Code direct access to my Telegram account. Read messages, send messages, search conversations, manage channels.

Why would I want an agent controlling my Telegram? Because Telegram is where my community lives. Thousands of subscribers, dozens of daily messages across group chats. Managing that manually means constant context switching.

With telegram-mcp, I say "check if anyone asked a question in the community chat in the last hour" and Claude reads the recent messages and summarizes them. I say "draft a post about the new SwanRate report" and Claude composes it in my writing style and saves it as a draft. I still approve everything before it goes out. The agent drafts, I publish.

I also use it for research. "Find all messages in our group chat where someone mentioned rate limiting" pulls up relevant conversations instantly, no manual scrolling.

What I deliberately don't use

A setup post isn't complete without the things I've tried and rejected.

I don't use tmux. I used to. Complicated config, custom keybindings, session management. Then I realized I was spending more time managing terminal sessions than working. Now I open multiple Ghostty windows. Simple, stateless.

I don't use Cursor. Good product, but it's an IDE with AI bolted on. I prefer the direct conversation in Claude Code. When model and interface are from the same company, integration is tighter.

I don't use community agent frameworks like Gastown, Beads, or the dozen others that pop up every week. I studied several. Read their prompts carefully. Took specific ideas and put them into my own CLAUDE.md files. Moved on. The pattern: the best ideas from community tools get absorbed into Claude Code itself within weeks. Anthropic watches what works and ships it natively.

I don't use AI wrappers for git. Claude Code already understands git. I don't need a separate tool to write commit messages or manage branches.

How these tools connect

None of these tools exist in isolation. They form a workflow that looks like this:

I wake up, open Ghostty, type claude, and start my day. Spokenly handles input when I want to describe something complex without typing. Claude Code does the building. Coolify MCP handles deployment. telegram-mcp handles community communication. takopi captures ideas when I'm away from my desk. ownyourchat lets me search through everything I've discussed with AI. Descript handles the show.

The key insight is that every tool in this list solves exactly one problem. No tool tries to do two things. Ghostty is just a terminal. Spokenly is just voice-to-text. agent-browser is just browser automation. When a tool tries to be a platform, I get suspicious. Platforms grow complexity. Single-purpose tools stay simple.

The show: "Show me your AI setup"

I mentioned the YouTube show earlier, but it deserves its own section because it directly feeds into how I discover new tools.

The concept: I invite someone, they share their screen, and they show me their actual AI workflow. Not a prepared demo, not slides, but "here's a real task, watch me do it with my tools."

Episode 1 with Misha is where I discovered ownyourchat and several other tools I hadn't considered. Watching someone else work is the fastest way to find gaps in your own setup. You see them do something in two steps that takes you ten, and you immediately want to know how.

I plan to do more episodes. If you have an interesting AI workflow and want to show it on camera, reach out on Telegram. The only requirement is that you actually use the tools daily. No theoretical setups, no "I just installed this yesterday." I want to see the workflows people have lived with long enough to have opinions about.

Subscribe at youtube.com/@danokhlopkov if you want to see the next episode. And before you ask, yes, I edit the videos with Descript.

What changes next month

This setup is a snapshot. It will look different in March. Some tools will get replaced, new ones will appear, and at least one thing I currently love will probably annoy me enough to drop.

The direction I'm moving is deeper integration between the agent and my infrastructure. More MCP servers connecting Claude Code to more services I use daily. The goal is a single terminal session where I can build, deploy, communicate, and research without switching contexts.

The tools themselves matter less than the principle: own your stack, keep it simple, and prefer first-party over third-party. Everything else is details.


Find me elsewhere: X (Twitter) · Telegram · GitHub


Dan Okhlopkov — AI agent practitioner. Building tools for TON Blockchain analysis and Telegram automation.

Telegram · Twitter/X · Instagram · Threads · YouTube