0 → 1 in one week by one person
Last Monday I had a company name and a co-founder conversation. By Sunday I had 27 deliverables: a working AI product, an investor pitch deck, a partners deck, a revenue model, a PRD, a course database, a brand identity, a waitlist landing page, an Instagram presence, a digital marketing strategy, a demo video, a one-pager for friends-and-family, a one-pager for partners, a golf tourism deck, a system diagram, an operating model, company principles, a culture doc, an SEO strategy, target user feedback, a challenges-and-mitigations doc, and six custom Claude Code skills. Company incorporated. Domain live. Branded email working.
One person. One week. Claude as the amplifier.
This isn't a story about AI writing code. The working MVP was one deliverable out of twenty-seven. This is about how I treated the entire 0-to-1 as a single coherent sprint (product, commercial, engineering, brand, marketing, leadership) and why the upfront hours I spent writing before building anything were the reason the whole thing held together.
What MonarchLinks is
MonarchLinks is a planning tool for golfers visiting Scotland. The problem: a decent Scottish golf trip takes 15+ hours to plan. You're cross-referencing tee times across a dozen courses, figuring out which ones are close enough to play on the same day, guessing at green fees that change by season, and trying to work out whether the A9 north of Inverness is going to add two hours to your drive.
The product runs a wizard to capture preferences, hands off to a Claude-powered agent that calls tools (search courses, get pricing, calculate travel times), and builds a priced day-by-day itinerary. The user watches it assemble in real time, then refines conversationally. There's a sharing system for group trips, Supabase auth, and a React Native mobile companion app.
The live MVP is at https://monarchlinks-production.up.railway.app/.
But the MVP was maybe 30% of the week's work. The rest was everything you need to actually take a product to market.
The 27 deliverables
I tracked them across six domains:
Commercial. Company incorporation, revenue model, investor pitch deck, partners pitch deck, API partners deck, one-pager for friends-and-family, one-pager for partners, golf tourism deck, key contacts list.
Engineering. Working MVP (React + FastAPI + Claude agent + Supabase), course database, system diagram, CLAUDE.md, six custom Claude Code skills, domain and branded email.
Marketing and branding. Brand identity and style guide, Instagram presence, waitlist landing page, digital marketing strategy, SEO and GXO strategy, demo support pack, demo video.
Product. PRD, target user feedback, challenges and mitigations.
Leadership. Principles, culture, operating model.
Twenty-seven items. Some took an hour. Some took half a day. None were throwaway. Each one had to be good enough to put in front of an investor, a partner, or a user, because that's the point of 0-to-1.
You don't get to ship rough drafts to a course operator and say "we'll tighten this up later."
Why I spent Day 1 writing, not building
I didn't open a code editor until Day 2. Monday was a writing day.
I wrote three documents that everything else flowed from: a PRD, a CLAUDE.md file, and a brand style guide. Together they ran to about a thousand lines. No working software. No slides. No landing page. Just a clear, well-articulated vision.
The PRD defined what MonarchLinks is and isn't. "Not a tour operator. Not trying to replace Airbnb." It defined the personas, the business model, the partnership architecture, the UX principles. It answered the questions that, left unanswered, eat hours later: how does auth work? What's the agent's fallback when it fails? Do we charge courses or earn from tee time platforms?
The CLAUDE.md file is the configuration that Claude Code reads at the start of every session. Mine ran to 300+ lines. Architecture decisions locked: FastAPI on Railway, no Next.js, raw Anthropic SDK (no LangChain, no CrewAI), Supabase for auth, in-memory caching, no Redis. Coding conventions: Pydantic models use camelCase aliases, tool returns capped at 200 tokens, error messages warm not robotic. Design principles: one clear action per screen, progressive disclosure, no dead ends.
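To make the shape of that file concrete, here is a condensed, illustrative excerpt. The headings and entries are reconstructed from the decisions listed above; it is not the actual file.

```markdown
# CLAUDE.md (illustrative excerpt)

## Architecture (locked)
- Backend: FastAPI on Railway. No Next.js.
- Agent: raw Anthropic SDK. No LangChain, no CrewAI.
- Auth: Supabase. Caching: in-memory. No Redis.

## Coding conventions
- Pydantic models use camelCase aliases.
- Tool returns capped at 200 tokens.
- Error messages are warm, not robotic.

## Design principles
- One clear action per screen.
- Progressive disclosure. No dead ends.
```

Claude Code loads this at the start of every session, so each decision only has to be stated once.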
The brand style guide defined colour palette, typography, tone of voice, photography direction, logo usage, copy patterns. Deep Navy #1B1B3A as the anchor. Serif headings with wide letter-spacing. "Confident but not arrogant. State facts plainly."
The ripple-down effect
Here's what "writing first" actually bought me in practice.
Day 4, building the frontend. I said "build the auth screen" and Claude generated a component with #1B1B3A for the background, CormorantGaramond_400Regular for the heading, wide tracking on the title, and the brand strapline as the subtitle. I didn't specify any of that in the prompt. It was in the style guide, loaded via CLAUDE.md. It just happened.
Same when I wrote the pitch decks. The brand voice was already defined ("concise, knowledgeable, warm"), so the investor narrative didn't read like a different company from the product copy. The one-pagers pulled the same revenue model figures from the PRD. The golf tourism deck used the same market sizing. Consistency came free because the source material existed.
Pydantic models with camelCase aliases? Every API endpoint serialised correctly on the first try. The frontend TypeScript interfaces matched. Zero time debugging JSON key casing across the entire stack.
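The camelCase-alias convention is one line of shared config. A minimal sketch, assuming Pydantic v2 (the model and field names here are invented for illustration):

```python
# One shared base model: snake_case in Python, camelCase on the wire.
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel

class CamelModel(BaseModel):
    # alias_generator rewrites every field name for serialisation;
    # populate_by_name lets internal code keep using snake_case.
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

class ItineraryDay(CamelModel):
    day_number: int
    course_name: str
    green_fee_gbp: int

day = ItineraryDay(day_number=1, course_name="Royal Dornoch", green_fee_gbp=195)
payload = day.model_dump(by_alias=True)
# payload keys are camelCase: dayNumber, courseName, greenFeeGbp
```

Because the convention lives in one base class named in CLAUDE.md, every generated endpoint and every frontend interface agrees on key casing without anyone re-deciding it.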
This is the ripple-down effect. A decision documented once propagates everywhere, silently. It doesn't just save time in the session where you'd have debated it. It saves time in every future session where you would have debated it but now don't even think about it, because the decision is already loaded into context.
The CLAUDE.md file isn't documentation. It's a multiplier. Every hour I spent on it early bought back 5-10 hours later.
How I actually worked
The raw workflow for engineering was a three-phase loop: Research, Plan, Implement. Human review between each phase.
Research meant Claude explores the codebase and reports back: "here's how auth works, here are the files, here's what would need to change." Plan meant Claude proposes a step-by-step approach and I review before any code is written. This is where bad ideas die cheaply. A wrong plan costs 5 minutes, a wrong implementation costs an hour. Implement meant Claude writes code while I verify each step.
The thing I didn't expect: research mistakes cascade the worst. If Claude misunderstands the codebase in the research phase and that gets into the plan, the implementation is doomed.
For non-engineering deliverables, the pattern was similar but looser. For the pitch deck, I'd write the narrative structure and key claims, then have Claude generate slide content within that frame. For the marketing strategy, I'd define the target segments and channels, then Claude would flesh out the tactics. I stayed in the architect-editor seat. Claude did the drafting.
I also built six custom "skills," reusable prompt packages. Among them: /brand evaluated any output against the style guide. /craft scanned for code quality issues. /ai-writing-check flagged copy that sounded machine-generated. /persona-feedback tested UI decisions against the 12 user personas. These meant quality wasn't a one-off review at the end. It was a repeatable gate I could run at any point.
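As a sketch of what one of these looks like, here is a hypothetical version of the /brand skill, assuming Claude Code's standard skill layout (a SKILL.md file with YAML frontmatter giving the name and description, followed by instructions). The rules quoted are taken from the style guide described earlier; the wording of the skill itself is invented.

```markdown
---
name: brand
description: Evaluate any output against the MonarchLinks brand style guide.
---

When invoked, check the current output against the style guide:

1. Colour: Deep Navy #1B1B3A is the anchor. Flag off-palette values.
2. Typography: serif headings with wide letter-spacing.
3. Voice: confident but not arrogant. State facts plainly.

Report each deviation alongside the guide rule it breaks.
```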
What went wrong
Not everything went smoothly, and the failures are more instructive than the wins.
On Day 6 I scaffolded the entire React Native mobile app: six screens, tab navigator, chat with streaming, push notifications, booking flows. It compiled. TypeScript passed. I opened it in the browser: blank white screen.
Three things had failed simultaneously. NativeWind (a Tailwind-to-React-Native bridge) was silently broken on web, so every component rendered with zero height, invisible. The Supabase client was initialised with empty strings because the mobile app didn't have its own .env file, so the library threw at import time and crashed the module tree. And expo-notifications was imported at the top level of a file in the render path, throwing on web where no native notification module exists.
This was the cost of speed without verification. Claude generated working TypeScript that passed every static check. But "compiles" and "renders" are different things, and I hadn't opened the app until everything was wired together. That's on me.
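The second failure (a client silently initialised with empty strings) has a cheap, general guard: validate required environment variables at startup and crash loudly if any are missing. A minimal sketch in Python; the variable names are illustrative, and the same fail-fast idea applies to the mobile app's TypeScript config.

```python
# Fail-fast environment check: better a loud crash at startup than a
# client library quietly constructed from empty strings.
import os

REQUIRED_VARS = ["SUPABASE_URL", "SUPABASE_ANON_KEY"]  # illustrative names

def load_env(environ=os.environ) -> dict:
    """Return the required vars, or raise naming every missing one."""
    missing = [v for v in REQUIRED_VARS if not environ.get(v)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    return {v: environ[v] for v in REQUIRED_VARS}
```

Run at import time, this turns a blank white screen into a one-line error that names the missing .env file's contents.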
The lesson: the CLAUDE.md was thorough for the backend. I locked "FastAPI, no Next.js, raw Anthropic SDK" and it held. But I didn't apply the same rigour to the mobile stack. "No NativeWind, use StyleSheet" would have been one line and would have prevented the entire problem. The architecture decisions section was incomplete, and I paid for it.
What I'd tell someone attempting this
Start with the documents, not the code. It feels unproductive. It's the most leveraged thing you'll do all week. Every decision you write down clearly is a decision you won't re-make in 20 different sessions.
Be specific when you write them. "Use navy for CTAs" is good. "CTA buttons: Deep Navy fill #1B1B3A, white text, 8px border-radius, uppercase with 2pt letter-spacing" is better. Specificity compounds. Vagueness drifts.
And verify at every boundary. Run the app after every screen, not after all six. Deploy after every feature, not after the sprint. "It compiles" means nothing if you haven't seen it render.
The less obvious one: treat non-engineering deliverables with the same rigour. The pitch deck isn't a distraction from the product. It's the reason someone funds the product. The brand guide isn't a nice-to-have. It's the reason every screen, every deck, every email looks like they came from the same company. At 0-to-1, coherence is your scarcest resource.
What this changes
The interesting thing about this week isn't that I shipped fast. It's what I shipped.
Twenty-seven deliverables across six domains isn't a developer sprint. It's a company sprint. Product, commercial, engineering, brand, marketing, leadership. The full surface area of taking something from nothing to launchable.
I'm not a solo developer who happens to use AI. I'm a solo founder who directs an AI system. The skill that mattered wasn't writing code. It was writing specifications (PRDs, brand guides, architecture docs) clearly enough that a language model could execute them without drift across dozens of sessions.
That's the shift. The returns on clear, opinionated writing are now 10x what they used to be. The downstream consumer isn't a human collaborator who fills in gaps with experience and intuition. It's a model that follows your instructions faithfully, including the bad ones. Garbage in, garbage out has never been more literal.
The PRD, the CLAUDE.md, and the style guide aren't overhead. They're the product: the thing that keeps the customer at the core of everything built afterwards. Everything else is execution.