The Setup That Changed Everything
A colleague tried Claude Code for a week and gave up. "It's slower than writing the code myself." I'd said the exact same thing six months earlier. Now I ship features in half the time with it.
The difference wasn't the tool. It was a one-page markdown file, two shell scripts, and a realization: I was treating the AI like it could read my mind.
I'd write three lines of code and expect it to know my project conventions, my architecture, my error-handling patterns. That's like dropping a brilliant developer into a production codebase with zero onboarding and expecting them to ship by lunch.
The AI can write in any language, understand any framework, and work at any hour. But every session, it forgets everything — your codebase, your conventions, what you worked on yesterday. Once I started treating setup as part of the workflow, the results changed immediately.
Context = Onboarding Docs
On a new hire's first day, you hand them a wiki. For AI, this is your CLAUDE.md — a project context file loaded at the start of every session:
# Project Context
## Quick Start
make build # Build the project
make test # Run all tests
make lint # Check code quality
## Architecture Overview
- 4 microservices, each with its own database
- FastAPI for APIs, pytest for tests
- Database migrations use Alembic
## Key Patterns
- All models inherit from BaseModel
- Services have dependency injection
- Tests use Arrange/Act/Assert
- Commit messages follow [TICKET-ID]: brief description
## Important: Code Quality
Run `make lint-verify-local` before every commit.

Nothing fancy. One page. The effect is immediate — the AI writes code that matches your project's patterns instead of guessing.
Skills = Runbooks
You don't explain the deploy process from scratch every time someone deploys. You write a runbook.
Instead of retyping the same multi-step process every session, I package it once as a skill:
---
name: commit-workflow
description: Create a git commit with proper formatting
---
## Commit Workflow
1. Review changes: `git status` and `git diff`
2. Check recent commits: `git log -5 --oneline` (for style reference)
3. Draft message format: `[TICKET-ID]: Brief description`
- Extract ticket ID from branch name
- Write in imperative: "Fix bug", not "Fixed bug"
4. Stage files: `git add <specific files>` (never `git add -A`)
5. Create commit: `git commit -m "..."`
6. Verify: `git status`
## Important
- No "Co-Authored-By" lines
- No "Generated with AI" footers
- Do NOT push unless explicitly asked

Now I type `/commit` and the whole workflow runs. Consistent format, correct ticket ID, no accidental `git add -A`.
Skills can also change how the AI thinks. I built one that stops Claude from jumping straight to code — instead, it presents two or three approaches with tradeoffs before writing anything. One definition, written once, permanently changes the default from "here's the code" to "here's why you might want option A over option B."
That's not a runbook. That's teaching your new hire how to reason about decisions.
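Here's a stripped-down sketch of what that skill could look like, in the same format as the commit workflow above (the name and wording are illustrative, not the exact file I use):

```markdown
---
name: propose-options
description: Present 2-3 approaches with tradeoffs before writing any code
---

## Before Writing Code

1. Restate the problem in one sentence
2. List 2-3 viable approaches
3. For each: rough effort, main risk, what it optimizes for
4. Recommend one and explain why
5. Wait for confirmation before implementing

## Important
- Skip this for trivial changes (typos, renames, one-liners)
```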
Guardrails = Automated Checks
Even a good developer forgets to lint. You don't rely on reminders — you automate it.
I set up hooks that run before every commit:
#!/bin/bash
echo 'Running linter...'
make lint-verify || {
    echo 'Linting failed. Auto-fixing...'
    make lint-fix
    make lint-verify
}

The commit doesn't happen until the code is clean. The AI doesn't get to skip it.
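How the script gets triggered depends on your setup. The simplest wiring is a plain git pre-commit hook, which blocks the AI's commits exactly the way it blocks mine (the scripts/ path below is illustrative):

```bash
# Install the lint script as a git pre-commit hook (illustrative path)
cp scripts/pre-commit-lint.sh .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
# From now on, git runs it before every commit; a non-zero exit blocks the commit
```

Claude Code's own hook settings can run the same script around tool calls if you'd rather not touch .git/hooks; either way, enforcement is automatic instead of a reminder.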
I also set up path-scoped rules — testing conventions only apply when Claude touches files in tests/. Migration rules only activate for migration files. You don't brief the backend engineer on CSS conventions.
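A lightweight way to express that, assuming your tool picks up directory-local context files, is to keep the rules next to the code they govern (a sketch; the naming rule is illustrative):

```markdown
# tests/CLAUDE.md (only relevant when touching files under tests/)
- Use pytest; structure every test as Arrange/Act/Assert
- One behavior per test, named test_<unit>_<expected_behavior>
- Prefer fixtures over module-level setup
```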
Agents and Teams = Delegation and Org Design
There's a difference between telling a junior developer each step and handing a senior developer a goal. With an agent, you define an objective — "debug this failing test and fix the root cause." The agent plans, executes, runs tests, checks its own work, and iterates.
When a task is complex enough, one agent isn't enough. Same reason you don't ask one person to do UX, frontend, backend, and QA. I've built workflows where one agent handles database migrations, another builds API endpoints, and another runs tests. What took two hours with one agent runs in 30 minutes with three working in parallel.
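The definition for something like the migrations agent is mostly a goal plus boundaries, written in the same frontmatter style as the skills above. A sketch (the name and steps are illustrative; the Alembic commands are standard):

```markdown
---
name: migration-agent
description: Plan, generate, and verify an Alembic migration for a schema change
---

Goal: given a model change, deliver a reviewed, applied, tested migration.

1. Diff the changed models against the current schema
2. Generate: `alembic revision --autogenerate -m "[TICKET-ID]: describe change"`
3. Read the generated script; flag anything destructive (drops, type narrowing)
4. Apply locally: `alembic upgrade head`
5. Run `make test` and report failures rather than silently patching them
```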
The Compounding Effect
This is the part that surprised me most: the system gets better over time without the AI getting smarter.
At the end of each session, I write a structured summary:
# Session: [Date]
**What was done:** Fixed login timeout bug in auth service
**Key decisions:**
- Used caching instead of query optimization (faster to ship)
- Added monitoring to catch regressions
**Learnings:**
- Our auth service hits the database 3x per request
- Need to index the session table
**Follow-up:**
- Next person should investigate bulk operations

The AI reads these at the start of the next session. Over weeks, the context improves, skills get refined, guardrails catch more edge cases. Week 1 is rough. By week 4, the difference is significant.
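The wiring is deliberately boring: the summaries live next to the code, and CLAUDE.md points at them. A sketch (the docs/sessions/ path is illustrative):

```markdown
## Session Log
- At the start of a session, read the newest file in docs/sessions/
- At the end of a session, write docs/sessions/<date>.md using the template above
```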
Where to Start
- Write a CLAUDE.md file — One page. Project structure, key patterns, common commands.
- Create one skill — Pick something you do daily. Write it as a numbered checklist.
- Add one hook — Something that prevents mistakes automatically. One bash script.
- Use it for a full day — Don't judge it after one prompt. The difference shows up after a few hours, not a few minutes.
Takeaway
The shift isn't about learning AI. It's about applying what you already know about working with people — onboarding docs, runbooks, automated checks, delegating goals instead of dictating steps.
I stopped optimizing my prompts and started optimizing what the AI knows before it writes a single line.