Strategy

Mar 23, 2026

I'm Just a PM (Who Shipped to Production)


The gap between customer feedback and live code is now measured in hours, not sprints. Backstroke's PM explains the AI-native workflow that made it possible.

Adam Gardner

Director of Product

I'm not an engineer. I want to be upfront about that.

I can't debug a memory leak. I don't know what half the terminal output means when something breaks. I once spent 20 minutes wondering why my changes weren't showing up before realizing I hadn't saved the file.

But three weeks ago, I shipped my first pull request to production. And it started with a customer dropping feedback in Slack at 3 p.m.

By noon the next day, it was live.

The Setup (Or: How I Learned to Stop Worrying and Ask Brian What npm Stands For)

Before any of this was possible, I had to actually get Backstroke running on my machine. I want to be honest about this part: it was overwhelming.

I didn't have a GitHub account. I had to create one, get added to our org and figure out how to copy our codebases locally. Our CTO Brian had to walk me through it essentially step by step. This meant that he spent part of his afternoon explaining that it's npm, not nmp. (I definitely tried nmp first.)

To give you a sense of the complexity: Getting our app running locally meant installing and configuring somewhere around a dozen tools I'd never heard of, getting access to five or six different systems, running database restore commands, fixing permissions with SQL I didn't write and couldn't explain, then somehow getting three separate services talking to each other on the right ports. There was a moment where I was genuinely unsure if I'd broken my Mac.

The honest truth is that without Brian, I wouldn't have gotten through it. And that's not a knock on the docs, because the setup guides written for engineers make assumptions that non-engineers don't know to question. 

My experience did clarify something critical to me: The only way to really understand what you're building is to actually run it. 

The pain was a one-time cost. What I got on the other side was worth it.

What "AI-First PM" Actually Looks Like in Practice

Once I had everything running, I wanted to figure out what I could actually do with it.

My workflow now looks like this.

Talking first, typing second. This might sound odd, but one of the most useful things I do is just talk. On a walk, in the car, sitting at my desk, I'll use voice mode or Granola to ramble through a problem out loud. 

What's the feature? What's the user's pain? What are the open questions? What am I unsure about? 

Speaking forces a kind of clarity that typing doesn't, and it gets everything out of my head and into a form I can hand to Claude. It's become how I start almost every feature or enhancement.

Claude handles product requirement documents (PRDs). I'll take those rough thoughts, share relevant context—like user feedback, existing behaviors and constraints—and use Claude to draft and pressure-test a requirements doc. Essentially, I'm using it to poke holes. 

What edge cases am I missing? What would a skeptical engineer push back on here? 

The output is better than what I was writing alone, and faster.

Ask codebase questions before bugging anyone. This one's been quietly huge. Before I'd pull an engineer into a conversation, I can now ask Claude questions about our codebase directly. "Do we support X?" or "how hard would Y be?" or "where does Z live?" I don't always understand the full answer, but I get enough signal to know whether something is a quick lift or a real project. It's changed how I come into technical conversations.

Put Claude Code in plan mode before touching anything. This was the unlock. I'll drop a link to the PRD I wrote in Linear, and Claude Code pulls the full spec directly via our Linear integration. Then I ask it to read the relevant codebase files and map out the implementation before touching anything. This covers what files need to change, what the approach is, what could go wrong. 

Critically, I actually read the plan before letting it go. This is where I catch open questions, identify assumptions that need validating or push back on the approach. Only once I'm confident in the plan do I let it run.

Run locally to test. This sounds simple. It wasn't, at first. Early on, I didn't know I needed to start the servers. I didn't know I needed to push code to a branch. I didn't know a lot of things, and kept having to interrupt Claude mid-implementation to ask basic questions. 

Eventually, I worked with the team to build a single command—backstroke-day—that handles everything like authenticating with AWS, syncing all three repos, clearing any port conflicts and starting the full stack. 

Now, every morning, I type one word and I'm ready to go. That kind of tooling sounds small, but it's what actually makes this sustainable for a non-engineer.
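For illustration, here's a dry-run sketch of the kind of steps a one-word morning command like backstroke-day might bundle. Every command string, path and port below is an assumption for the example; the real script's contents aren't published.

```python
# Hypothetical sketch of a "morning setup" wrapper. All commands, repo
# paths and ports are illustrative assumptions, not Backstroke's actual
# tooling.
import subprocess

MORNING_STEPS = [
    ["aws", "sso", "login"],                          # authenticate with AWS
    ["git", "-C", "repos/backstroke-api", "pull"],    # sync each repo
    ["git", "-C", "repos/backstroke-web", "pull"],
    ["git", "-C", "repos/backstroke-workers", "pull"],
    ["npx", "kill-port", "3000", "4000", "5432"],     # clear port conflicts
    ["docker", "compose", "up", "-d"],                # start the full stack
]

def run_morning(dry_run: bool = True) -> list[str]:
    """Print each step (or execute it); returns the rendered command lines."""
    rendered = [" ".join(step) for step in MORNING_STEPS]
    for step, line in zip(MORNING_STEPS, rendered):
        print(line)
        if not dry_run:
            subprocess.run(step, check=True)
    return rendered

run_morning()  # dry run: just lists the steps
```

The point of wrapping these in one command is exactly what the post describes: a non-engineer shouldn't have to remember the order, or even that the steps exist.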

Push a PR for engineer review. I'm not bypassing my team, but changing what I bring to them. Instead of handing off a PRD and waiting, I'm handing off a working implementation they can review, improve and merge. Or they can reject it with specific feedback, which is also useful learning.

The First Real Rep: Font Safety

Here's a concrete, real-life story of this process at play.

One afternoon, one of my teammates—Egan, our VP of Marketing—dropped feedback in our internal #product-feedback Slack channel. This issue had come up with a customer: Our brand config didn't make it clear which fonts were email-safe versus web-only. If you pick a web font for your email, it silently falls back to something generic in most inboxes. Users had no idea this was happening.

It’s a legitimate problem. Not a huge feature, but the kind of thing that erodes trust quietly.

Here's how I would have addressed it before: I'd write up the solution, it would go into the backlog, compete with everything else and ship in a few weeks, if we're lucky.

Here's what happened instead. I turned it into a Linear issue, fed that issue to Claude Code, told it to read the relevant files and propose an approach, and reviewed the plan. It created a feature branch and wrote the implementation: email-safe vs. web font indicators, quick-add sections for safe fonts, fallback warnings when you pick a web font and labels on existing fonts so expectations are set.
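The core check behind those indicators and warnings can be sketched in a few lines. The function name, the safe-font list and the warning copy are all illustrative assumptions, not Backstroke's actual code.

```python
# Hypothetical sketch of an email-safe font check. The safe list and
# naming are illustrative; real email-client support varies.

# Fonts that render reliably across major email clients (illustrative list).
EMAIL_SAFE_FONTS = {"Arial", "Georgia", "Times New Roman", "Verdana", "Courier New"}

def classify_font(font_name: str) -> dict:
    """Label a font for a brand-config UI and warn on web-only picks."""
    safe = font_name in EMAIL_SAFE_FONTS
    return {
        "font": font_name,
        "label": "email-safe" if safe else "web-only",
        # Web fonts silently fall back in most inboxes, so surface it.
        "warning": None if safe else (
            f"{font_name} isn't supported by most email clients and will "
            "fall back to a generic font."
        ),
    }

print(classify_font("Georgia")["label"])  # email-safe
print(classify_font("Inter")["label"])    # web-only
```

The feature's value is in the last field: instead of a silent fallback, the user sees the warning before they pick the font.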

I ran it locally. Clicked through it. It looked right. I iterated on some of the copy and UX polish. Pushed the PR.

Michael and the team reviewed it that evening. Merged the next morning.

3 p.m. feedback. Noon the next day, it's live. My first PR ever.

What Breaks (The Honest Part)

This isn't a perfect system, and I want to be clear about that.

I still run into things that require an engineer. There are errors I can't interpret, edge cases I didn't anticipate and implementation decisions that need someone with real context to weigh in. The difference now is what happens when I hit those walls. I document what went wrong. I ask Claude to turn the solution into a skill—a reusable set of instructions—so I don't have to hold it in my head or ask the same question twice. I learn from each one, so the next rep is a little smoother.

I also can't assess risk the way an engineer can. I have a lane that works: features where the scope is clear, the UI is the main surface and the stakes of getting it slightly wrong are low. I'm not the right person to be touching anything near payments, authentication or data pipelines. I know that, and I try to stay honest about it.

But here's what's shifted. This way of working has started to change how our whole team thinks. We have a standing weekly meeting—we call it "AI-ify"—where all we talk about is how to bring AI further into our process. How to build shared context across our repos, so agents have what they need. How to create skills that anyone on the team can use. How to compress the loop between idea and implementation at the team level, not just individually. 

The value isn't that I've become a developer. It's that the first 80% of implementation—the translation of a clear product idea into working code—no longer requires one. And that's pushing all of us to think differently about how we build.

What This Changes

There's a version of this that's a story about AI productivity. Faster shipping, fewer handoffs, smaller teams. That's real, and it's worth taking seriously.

But the thing I keep coming back to is what it does to learning velocity.

The bottleneck in most product orgs isn't ideas. It's the time between "here's what I think we should build" and "here's what happened when users touched it." 

Every hour that gap is shorter, you get smarter faster. You stop debating in the abstract and start learning from reality. When I can go from customer feedback to something testable in an afternoon, I find out quickly whether my instinct was right. Not in a sprint review. Not in a quarterly retro. That afternoon.

That's the actual bet. Not that AI replaces engineers; it doesn't and ours are still the ones who make sure what I build isn't a liability. The bet is that AI removes the translation layer between insight and iteration. And when that layer shrinks, the whole team learns faster.

Still Just a PM

I still don't fully know what git is doing half the time I run it. I have a .zshrc file that has been edited by at least three different AI models and I'm afraid to look at it. My local setup has a comment in it that just says "don't touch this" and I have no memory of writing it.

But I shipped to production. And I'll do it again!

The gap between "should we build this?" and "what did users think of it?" is now measured in hours at Backstroke. That changes everything about how we work.

If you're a PM who's been told you can't do this stuff, I'd push back on that. The tools are genuinely good now. The setup is painful (we're fixing ours), but it's a one-time cost. And the other side of it is worth it.

Just make sure you know it's npm, not nmp.


Adam Gardner is Director of Product at Backstroke, an AI-native email marketing platform for e-commerce brands. This post is part of our monthly R&D blog series.