How I Built This Website Using AI: A Claude Code + Ralph Story
You're looking at a website that was built almost entirely by AI. Not just generated—architected, tested, polished, and deployed using Claude Code CLI and a methodology called Ralph. And yes, you probably just saw a little pop-up telling you exactly that.
This is the story of how it happened, what I learned, and why I believe this approach represents a fundamental shift in how we build software.
Why Build a Website with AI?
I'll be honest: I could have hired a developer, used a template, or spent weeks building this myself. I've done all of those before. But I wanted to prove something—both to myself and to anyone watching.
If AI can help us write emails and summarize documents, can it help us build real software? Not a toy project. A production website with:
- A complete design system
- 299 automated tests
- Lighthouse scores above 95
- Custom blog system with search and filtering
- Email integrations and analytics
The answer, it turns out, is yes. But not in the way you might expect.
The Tools: Claude Code CLI + Ralph Workflow
This project used two key pieces:
Claude Code CLI is Anthropic's command-line interface for Claude. It lets you hold a conversation with the AI while it reads, writes, and executes code directly in your project. Think of it as pair programming where your partner has read every programming tutorial ever written.
Ralph is a workflow methodology created by Geoffrey Huntley. It's not a tool—it's a process for working with AI that makes the collaboration actually work.
The core insight of Ralph is this: AI assistants work best when they have clear specifications, tight feedback loops, and fresh context. Without structure, AI generates plausible-looking code that subtly misses the mark. With Ralph, AI generates code that actually fits your project.
How Ralph Works
Ralph operates in three phases that repeat continuously:
Phase 1: Define Requirements
Before writing any code, you create specification files. These aren't vague user stories—they're precise documents that define exactly what you want.
For this website, I created four spec files:
- design-system.md: Colors, typography, spacing, animations, and explicitly what NOT to do (no Inter font, no purple gradients, no bouncy animations)
- testing.md: Test coverage requirements, mocking strategies, specific test counts per module
- design-validation.md: Rules for automated checking of design compliance
- pre-commit-hooks.md: Quality gates that run before any code gets committed
The specs aren't suggestions—they're the source of truth. When AI generates code, it checks against these specs. When tests run, they validate these specs. The specs are the product.
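To make "precise" concrete, here's the flavor of one of them (a reconstructed excerpt for illustration, not the verbatim file):

From testing.md:
- Every data module ships with unit tests; overall coverage stays at or above 70%
- External services (email, analytics) are always mocked; tests never make network calls
- The full suite runs in the pre-commit hook, and a failing test blocks the commit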
Phase 2: Planning Loop
Claude reads all the specs and existing code, performs a gap analysis (what's defined vs. what's built), and generates an implementation plan. The key constraint: no code gets written in this phase. Just planning.
This matters because AI tends to jump to solutions. By forcing a planning phase, you catch bad assumptions early.
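The artifact this phase produces is deliberately unglamorous: a checklist file. Something like this hypothetical excerpt:

- [x] Layout shell: header, footer, Container component
- [ ] Card component per design-system.md (navy surface, subtle shadow, no glow)
- [ ] CSV parser for project data, with unit tests per testing.md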
Phase 3: Building Loop
Now Claude picks the most important task from the plan, implements it, runs tests, and updates the plan. Then the context clears and the loop restarts.
This "fresh context per task" approach is counterintuitive but powerful. Instead of AI accumulating confusion over a long session, each task gets Claude's full attention with clear scope.
The Build: 11 Phases, 2 Months
Here's how the project actually unfolded:
| Phase | What Happened |
|-------|---------------|
| 1-2 | Foundation: Next.js 14, TypeScript, Tailwind, layout components |
| 3-4 | Data layer and shared UI: CSV parsing, Button, Card, Container |
| 5-6 | Homepage with 7 sections, secondary pages (About, Projects, Contact) |
| 7 | Blog system: MDX rendering, search, category filtering |
| 8 | Integrations: Email API, Google Analytics |
| 9 | SEO: Sitemap, robots.txt, JSON-LD structured data |
| 10 | Polish: Lighthouse optimization, accessibility audit |
| 11 | Quality: 299 tests, pre-commit hooks, design validation |
Each phase built on the previous one. Claude didn't try to do everything at once—it worked through the plan systematically.
What Actually Went Wrong
This is the part most AI success stories skip. Here's what didn't work:
The DNS Disaster
When we first deployed to production, the site didn't work. At all. The custom domain showed a GoDaddy placeholder page instead of my website.
The fix was simple (update DNS records to point to Vercel), but Claude couldn't help with this. AI doesn't have access to my domain registrar. I had to debug it myself by checking DNS propagation, verifying Vercel's CNAME requirements, and waiting for records to update.
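For the record, the fix amounted to two records at the registrar, roughly like this (these are Vercel's documented defaults as of this writing; always confirm against your own Vercel dashboard):

```
@    A      76.76.21.21            ; apex domain -> Vercel
www  CNAME  cname.vercel-dns.com.  ; www subdomain -> Vercel
```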
Lesson learned: AI can build software, but infrastructure configuration often requires human hands.
The Rick Roll Incident
During development, I used placeholder data. The YouTube video section? Placeholder ID dQw4w9WgXcQ. The project links? All pointing to example.com.
Claude dutifully built a beautiful video player that Rick Rolled every visitor. The code was perfect. The data was absurd.
Lesson learned: AI treats all data as legitimate. Placeholder content in production is your problem, not the AI's.
Fighting AI Aesthetics
Left to its own devices, AI loves:
- Inter font (everyone uses it)
- Purple-to-teal gradients (the "AI startup" look)
- Bouncy, playful animations (looks good in demos)
- Generic inspirational copy
I call this "AI slop." It's not bad exactly—it's just forgettable. Every AI-generated website looks vaguely the same.
The fix was explicit in the specs. I wrote rules like:
DO NOT USE:
- Inter or Roboto fonts
- Purple/teal/pink gradient combos
- Bounce or spring animations
- Centered body text paragraphs
This worked surprisingly well. By telling Claude what to avoid, the design became distinctive. Navy and amber. Instrument Sans font. Subtle shadows, not glowing borders.
Lesson learned: AI defaults to the median of its training data. Distinctive design requires explicit constraints.
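The constraints also became executable. The design-validation spec describes automated checks for exactly these rules; a minimal sketch of such a script might look like this (the token list and paths are illustrative, not the site's actual rules):

```typescript
// check-design.ts -- sketch of an automated design-compliance gate.
import { readFileSync } from "node:fs";
import { globSync } from "glob"; // npm package "glob" (v9+)

const FORBIDDEN = [/\bInter\b/, /\bRoboto\b/, /purple/i, /bounce/i, /spring/i];

let failed = false;
for (const file of globSync("src/**/*.{ts,tsx,css}")) {
  const text = readFileSync(file, "utf8");
  for (const rule of FORBIDDEN) {
    if (rule.test(text)) {
      console.error(`${file}: forbidden design token ${rule}`);
      failed = true;
    }
  }
}
process.exit(failed ? 1 : 0);
```

Wired into the pre-commit hook, this turns aesthetic rules into failing commits instead of review comments.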
The Results
After 11 phases and roughly 2 months of part-time work:
- 299 tests passing (target was 50)
- 70%+ code coverage across all modules
- Lighthouse scores: Performance 100, Accessibility 98-100, Best Practices 96-100, SEO 100
- Zero production bugs reported since launch
- 5 main pages fully functional with search, filtering, and dynamic content
The site is live at sharonsciammas.com. You're reading it right now.
What I'd Do Differently
If I started over:
1. Write Specs Before Touching Claude
I initially tried to "just start building" and let requirements emerge. This created churn. Every vague requirement became three rounds of revision.
Now I'd spend the first week writing specs with zero AI involvement. Claude is great at implementing clear requirements. It's much weaker at surfacing the requirements you haven't articulated yet.
2. Separate Content from Code Earlier
Data files (videos, projects, speaking engagements) should be real from day one—or explicitly marked as "PLACEHOLDER_DO_NOT_SHIP". I wasted time building features around fake data that had to be rebuilt when real content arrived.
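One cheap way to enforce that is to flag placeholder records in the data itself and refuse to build with them. A sketch (the field name and check are hypothetical, not the site's actual code):

```typescript
// data/videos.ts -- sketch of explicitly flagged placeholder content.
interface Video {
  title: string;
  youtubeId: string;
  placeholder?: "PLACEHOLDER_DO_NOT_SHIP";
}

export const videos: Video[] = [
  { title: "Intro talk", youtubeId: "dQw4w9WgXcQ", placeholder: "PLACEHOLDER_DO_NOT_SHIP" },
];

// Called from a prebuild script so production builds fail loudly:
export function assertNoPlaceholders(): void {
  if (videos.some((v) => v.placeholder)) {
    throw new Error("Placeholder content found; refusing to ship.");
  }
}
```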
3. Trust the Process More
Early on, I kept interrupting Claude mid-task to add new requirements. This breaks the Ralph loop. The AI loses context, partially completed work creates conflicts, and quality drops.
The workflow works best when you let each task complete fully before adding new ones.
Why This Matters
I'm not claiming AI can replace developers. This project required significant human judgment:
- Deciding what to build
- Defining brand and aesthetics
- Debugging infrastructure
- Providing real content
- Making tradeoff decisions
But AI fundamentally changed the how. Instead of writing every line of code, I wrote specifications and reviewed output. Instead of debugging syntax errors, I debugged conceptual mismatches. Instead of Googling "how to parse CSV in TypeScript," I described what I needed and got working code.
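That CSV example isn't hypothetical; the data layer built in phases 3-4 loads projects and videos from CSV files. A minimal version of that kind of helper looks like this (a sketch that assumes no quoted or escaped commas; real CSV needs more care):

```typescript
// csv.ts -- minimal CSV-to-records helper (sketch; no quoted/escaped fields).
export function parseCsv(text: string): Record<string, string>[] {
  const [headerLine, ...lines] = text.trim().split(/\r?\n/);
  const headers = headerLine.split(",").map((h) => h.trim());
  return lines
    .filter((line) => line.length > 0)
    .map((line) => {
      const cells = line.split(",").map((c) => c.trim());
      return Object.fromEntries(headers.map((h, i) => [h, cells[i] ?? ""]));
    });
}
```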
This is a leverage shift. The same effort produced more output. And critically, the output is maintainable—not a tangled mess that only the AI understands, but clean TypeScript with tests and documentation.
Getting Started with AI-Assisted Development
If you want to try this approach:
Start Small
Don't rebuild your company's main product with AI on day one. Pick a side project where mistakes don't matter. Learn the workflow without high stakes.
Write Better Specs
The quality of AI output directly correlates with the quality of your specifications. Vague inputs produce vague outputs. Spend time defining exactly what you want.
Embrace the Loop
Ralph's three-phase loop (Define → Plan → Build) feels slow at first. You want to just start coding. Resist this. The structure exists because it works.
Keep Humans in the Loop
AI generates, humans validate. Always read the code. Always run the tests. Always check the output. AI is a powerful tool, not a replacement for judgment.
The Bottom Line
This website exists because AI made it feasible. Not easy—feasible. I still made decisions, solved problems, and shaped the outcome. But I did it with a collaborator that never gets tired, never forgets documentation, and never complains about writing tests.
Is this the future of software development? I think it's a future. Not the only one. But for projects like this—where one person wants to build something substantial—AI assistance isn't just helpful. It's transformative.
You're looking at the proof.
Want to learn more about the tools mentioned in this post? Check out Claude Code CLI from Anthropic and the Ralph workflow by Geoffrey Huntley.