On using AI.
I use it every day. Here's what that actually looks like — and what it doesn't change.
When ChatGPT launched in late 2022, the first thing I did was ask it to write a Python script to batch-convert images to WebP. It worked. That was enough to take it seriously.
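That script is the sort of thing a model produces in one shot. A minimal sketch of what it might look like, assuming the Pillow library is installed (`pip install Pillow`); the extension list and quality setting are illustrative, not the original script:

```python
from pathlib import Path

# Extensions worth converting; everything else is left alone.
SOURCE_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def find_images(root: Path) -> list[Path]:
    """Collect every convertible image under `root`, recursively."""
    return sorted(
        p for p in root.rglob("*")
        if p.suffix.lower() in SOURCE_EXTENSIONS
    )

def convert_to_webp(src: Path, quality: int = 80) -> Path:
    """Write a WebP copy of one image next to the original."""
    from PIL import Image  # imported lazily so discovery alone needs no Pillow
    dest = src.with_suffix(".webp")
    with Image.open(src) as img:
        img.save(dest, "WEBP", quality=quality)
    return dest
```

Run it over a directory with `for src in find_images(Path("photos")): convert_to_webp(src)`. Trivial to write by hand, and exactly the kind of mechanical translation that's faster to describe than to type.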
What changed wasn't that I stopped writing code. It's that I started spending more time thinking about what to build. The mechanical parts — translating intent into syntax, looking up API signatures, writing the same handler for the fifth time — got faster. The interesting parts — architecture, what to prioritise, what not to build — got more time.
The people who use AI best aren't the ones who prompt most cleverly. They're the ones who know what they want clearly enough to describe it. That clarity is a skill. AI makes the gap between a clear idea and working code smaller. It doesn't generate the clarity.
What I actually use
Architecture decisions, edge cases, explaining a confusing API. Useful when the problem is still unclear and I need to pressure-test my thinking before writing code.
Boilerplate, repetitive patterns, typed schemas — but also refactoring across files and tracing how a change ripples through a codebase. It handles both the parts I already know and the parts that need more context than fits in my head. I still read everything it produces.
When a task is repetitive and mechanical, I describe it, get a script, verify it, and file it away. The Photoshop JSX automation from 2022 was the first version of this pattern.
I still debug by reading code. I still review everything that ships. The tools handle translation. Judgment stays mine.
That's a sustainable way to use it. Not as a replacement for knowing what you're doing — as an accelerant once you do.