Prompting Claude Code for a Side Project That Actually Ships
I've started about 30 side projects with AI help in the last year. Eight of them shipped. The rest are in a folder called abandoned on my laptop and I'm not going to open them.
Here's what I've noticed about the eight that actually shipped. None of it is about the AI. It's about how I treat the AI.
Pick projects you can describe in one sentence
If you can't describe your project in one sentence, you're going to confuse yourself before the AI gets a chance to. Good: "A CLI that reminds me to drink water every 90 minutes." Bad: "A productivity platform for knowledge workers." The second one is a startup. The first one is a side project. Only side projects ship on weekends.
Start with the smallest useful thing

Every abandoned project in my abandoned folder started with "let me just build the user accounts system first."
Every shipped project started with the actual useful thing. Hardcode everything else.
For the water reminder: step one was "ping me on my laptop every 90 minutes." That's it. No preferences, no database, no UI. Just a script that pops a notification. I used it for a week that way. Then I added features I actually wanted.
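Step one really is that small. Here's a sketch of the whole thing, assuming macOS, where osascript can pop the notification (on Linux, notify-send does the same job):

```python
import subprocess
import time

while True:
    time.sleep(90 * 60)  # 90 minutes
    # macOS: pop a desktop notification via osascript.
    # Linux equivalent: subprocess.run(["notify-send", "Drink some water"])
    subprocess.run([
        "osascript", "-e",
        'display notification "Drink some water" with title "Hydrate"',
    ])
```

That's the entire version I used for a week.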
Your first prompt should be tiny
Bad first prompt: "Build me a habit tracker app with user auth, database, frontend, mobile support."

Good first prompt: "Write a Python script that prints the current time and waits 90 minutes, in a loop."

Ship that. Use it. Then extend it. Each prompt from there is small.

Why? Because when you prompt too broadly, the AI makes architecture decisions on your behalf. It picks a framework. It picks a database. It names things. You end up with a house someone else designed, and when you want to change something it fights you. When you prompt small, you make all the big decisions. The AI just helps you type.
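Roughly what that good first prompt should hand back, give or take formatting:

```python
import time
from datetime import datetime

while True:
    # print the current time, then wait 90 minutes
    print(datetime.now().strftime("%Y-%m-%d %H:%M"))
    time.sleep(90 * 60)
```

If the response is much bigger than this, it made decisions you didn't ask for.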
Read every file the first time through

When the AI generates 8 files at once, you have to read all 8. I know. It's tedious. Skip this and you get burned later. The most common issues I catch:

- Hardcoded values that should be environment variables
- Imports that aren't installed (the AI assumes packages exist)
- Subtle security issues (ORM queries that aren't parameterized, JWT secrets that default to "secret"; see the sketch after this list)
- Code that technically works but isn't what you asked for
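The JWT one is worth showing because the fix is one line. A minimal sketch, assuming the app reads its secret from an environment variable (JWT_SECRET is a hypothetical name for illustration):

```python
import os

# What AI-generated code tends to ship: a silent fallback to a known value.
# JWT_SECRET = os.environ.get("JWT_SECRET", "secret")

# What you want: no fallback, so a missing secret crashes at startup
# instead of signing production tokens with "secret".
JWT_SECRET = os.environ["JWT_SECRET"]
```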
The "describe the bug" rule
When something breaks, don't paste the error and say "fix this." Instead: explain what you expected to happen, what actually happened, and where in the flow it broke.

Bad: "TypeError: Cannot read property 'map' of undefined, fix it"

Good: "When I load /books, the list renders fine. When I click Add Book and submit the form, I expect to be redirected back to /books with the new book visible. Instead I get a TypeError on books.map() in BookList.jsx. I think the form submit isn't waiting for the response before redirecting, so books is undefined."

The second one gets you a correct fix. The first one gets you a patch that hides the bug.
Commit every time it works

This is boring advice, but it saved me. Every time a feature works, even a tiny one, commit. Even if it's just "added a button." Because the next prompt might break three things you didn't notice, and you want to be one git reset away from a working version, not rebuilding from memory.
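In practice the loop is just this (the commit message and reset target are whatever fits your situation):

```sh
git add -A
git commit -m "added a button"   # it works; lock it in

# next prompt broke something? discard the uncommitted damage:
git reset --hard

# or, if you already committed the broken attempt:
git reset --hard HEAD~1
```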