Five Weeks with a Next.js Blog: What Got Built
On February 15 I published Migrating from Ghost to Next.js: A Journey with Claude and Cursor. That post described the initial migration: 26 posts moved from Ghost, a CI/CD pipeline set up, newsletter subscribers ported over.
Since then I kept adding things. Wrote thirteen more posts, wired up cross-posting, added SEO and a bunch of smaller stuff. Figured it's a good time to write down what the blog can do now.
Most of this got built in a pretty relaxed mode. I'd have a free fifteen minutes between work calls, open Cursor, describe what I want to Claude, and switch back to whatever I was doing. Come back later, review the result, maybe adjust, move on. No dedicated "blog development time" in the calendar. Just small slots here and there, accumulated over five weeks.
Traditional SEO
Structured data was one of the first things I asked Claude to add. Every post now generates JSON-LD with BlogPosting schema: headline, author, datePublished, image, keywords. The homepage gets a WebSite schema with SearchAction for sitelinks.
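Generating that schema is a small pure function over frontmatter. A simplified sketch of what it looks like; the Post shape and field names here are illustrative, not the blog's actual types:

```typescript
// Hypothetical post shape; the real frontmatter fields may differ.
interface Post {
  title: string;
  author: string;
  date: string; // ISO 8601
  image: string;
  tags: string[];
  slug: string;
}

// Build a schema.org BlogPosting object, ready to serialize into
// a <script type="application/ld+json"> tag in the page head.
function buildBlogPostingJsonLd(post: Post, baseUrl: string) {
  return {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    headline: post.title,
    author: { "@type": "Person", name: post.author },
    datePublished: post.date,
    image: `${baseUrl}${post.image}`,
    keywords: post.tags.join(", "),
    mainEntityOfPage: `${baseUrl}/${post.slug}`,
  };
}
```

In Next.js this object gets JSON.stringify'd into a script tag during rendering, so every post page ships its schema with no manual step.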
Open Graph tags, Twitter Cards, canonical URLs, and per-post descriptions are all pulled from frontmatter automatically. One place to edit, everything stays in sync.
There's a dynamic XML sitemap at /sitemap.xml and a robots.txt. On every deploy, IndexNow pings Bing, Yandex, Naver, and Seznam, and a separate script submits the sitemap to Google Search Console. I don't wait for crawlers to find new posts on their own.
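The IndexNow part is one of those small deploy scripts. A simplified sketch, with a placeholder host and key; the protocol lets a single POST to api.indexnow.org fan out to every participating engine:

```typescript
// Build an IndexNow submission payload (see indexnow.org for the spec).
// The key must also be served as a text file at the keyLocation URL
// so the engines can verify ownership. Host and key here are placeholders.
function buildIndexNowPayload(host: string, key: string, urls: string[]) {
  return {
    host,
    key,
    keyLocation: `https://${host}/${key}.txt`,
    urlList: urls,
  };
}

// Sketch of the actual submission; error handling kept minimal on purpose,
// since a failed ping should never block the deploy.
async function pingIndexNow(payload: ReturnType<typeof buildIndexNowPayload>) {
  const res = await fetch("https://api.indexnow.org/indexnow", {
    method: "POST",
    headers: { "Content-Type": "application/json; charset=utf-8" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) console.error(`IndexNow responded ${res.status}`);
}
```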
Google Analytics 4 is there too, with custom events tracking visits to llms.txt and content-index.md. Spoiler: five weeks in, not a single LLM crawler has shown up in the logs. The endpoints work, I've tested them manually. The bots just aren't coming. Yet.
LLM SEO
This one I find more interesting than traditional SEO, honestly. I wrote about it separately in Making Your Blog LLM-Friendly: Implementing llms.txt.
The short version: LLMs crawl the web, and if your content isn't structured for them, they'll miss it or get it wrong.
So the blog has https://blog.rezvov.com/llms.txt, a machine-readable index that points to every post as a .md URL. Each post is available as raw Markdown at /{slug}.md with metadata headers. No HTML to parse, no JS to execute.
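A sketch of how such an endpoint can assemble its response; the exact metadata header format on the real site may differ:

```typescript
// Hypothetical shape of a loaded post; field names are illustrative.
interface PostSource {
  slug: string;
  title: string;
  date: string;
  tags: string[];
  markdown: string; // post body, already stripped of frontmatter
}

// Serve the post as plain Markdown with a small metadata preamble,
// so a crawler gets everything in one response: no HTML, no JS.
function renderMarkdownEndpoint(post: PostSource, baseUrl: string): string {
  const header = [
    `# ${post.title}`,
    ``,
    `> Published: ${post.date}`,
    `> Tags: ${post.tags.join(", ")}`,
    `> Canonical: ${baseUrl}/${post.slug}`,
    ``,
  ].join("\n");
  return header + post.markdown;
}
```

In Next.js this sits behind a dynamic route that matches /{slug}.md and returns the string with a text content type.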
The Markdown endpoints are open to LLM crawlers but excluded from search engine indexing via robots.txt, so there are no duplicate content issues with the HTML versions.
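One way to express that split in robots.txt, assuming the .md mirrors live alongside the HTML pages; Google and Bing both honor the * and $ wildcards, though some setups use an X-Robots-Tag: noindex response header instead:

```text
# Keep the raw-Markdown mirrors out of the search index
# while leaving them open to everything else.
User-agent: Googlebot
Disallow: /*.md$

User-agent: Bingbot
Disallow: /*.md$
```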
RSS
Full RSS 2.0 at https://blog.rezvov.com/rss.xml with complete article HTML in content:encoded. No excerpts, no "click to read more". The feed also doubles as a data source for Dev.to's RSS import.
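An item with the full HTML looks roughly like this when assembled by hand; the sketch skips XML-escaping of titles, which a real feed generator handles, and the channel must declare the content namespace (xmlns:content="http://purl.org/rss/1.0/modules/content/"):

```typescript
// Render one RSS 2.0 <item> with the complete post HTML in content:encoded.
function renderRssItem(p: {
  title: string;
  url: string;
  pubDate: string; // RFC 822 date, per RSS 2.0
  html: string;
}): string {
  // CDATA cannot contain "]]>", so split any occurrence across sections.
  const safeHtml = p.html.replace(/]]>/g, "]]]]><![CDATA[>");
  return [
    "<item>",
    `  <title>${p.title}</title>`,
    `  <link>${p.url}</link>`,
    `  <guid>${p.url}</guid>`,
    `  <pubDate>${p.pubDate}</pubDate>`,
    `  <content:encoded><![CDATA[${safeHtml}]]></content:encoded>`,
    "</item>",
  ].join("\n");
}
```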
Cross-Posting
Posts now go to three places: the blog itself, Dev.to, and Hashnode. All with canonical URLs back to the original.
Both integrations follow the same pattern: a publishing script with tag mapping (Dev.to has 14+ mapped tags, Hashnode 15+), canonical URL preservation, and a YAML plan file that tracks what's been published where.
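The tag-mapping core of both scripts is a lookup table plus a filter. The mappings below are invented examples, not the real tables, and the four-tag cap matches Dev.to's limit:

```typescript
// A few illustrative mappings; the real tables are larger (14+ / 15+ tags).
const DEVTO_TAGS: Record<string, string> = {
  "next.js": "nextjs",
  "ci/cd": "devops",
  "llm": "ai",
};

// Map blog tags to a platform's tag vocabulary, dropping anything
// unmapped rather than guessing; Dev.to allows at most four tags per post.
function mapTags(blogTags: string[], table: Record<string, string>, max = 4): string[] {
  return blogTags
    .map((t) => table[t.toLowerCase()])
    .filter((t): t is string => Boolean(t))
    .slice(0, max);
}
```

The YAML plan file then records, per slug, which platforms already have the post, so reruns of the script are idempotent.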
On deploy, GitHub Actions runs both scripts automatically. If Dev.to or Hashnode is down, the deploy still goes through. Failures get logged, nothing breaks.
This was a classic "free fifteen minutes" task. I described what I wanted, Claude wrote the Dev.to script, I tested it, moved on. Hashnode came a couple weeks later in the same way.
Newsletter
The newsletter existed from the migration, but it got better over time.
It runs on Mailgun Messages API with subscriber data in JSON files. Two email templates: welcome and post notification, both in Markdown with frontmatter.
The trickiest part was email rendering. There are two paths now: web (responsive YouTube iframes, CSS classes) and email (YouTube thumbnails, everything inlined). Code blocks needed syntax highlighting converted to inline styles because email clients ignore <style> tags. That one took more than fifteen minutes.
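The conversion is essentially a find-and-replace over token spans. The class names here are invented for illustration; real highlighters like Prism or Shiki use their own naming schemes:

```typescript
// Map highlighter class names to inline styles; a tiny illustrative subset.
const TOKEN_STYLES: Record<string, string> = {
  keyword: "color:#c678dd",
  string: "color:#98c379",
  comment: "color:#5c6370;font-style:italic",
};

// Replace class-based token spans with inline-styled ones so the colors
// survive email clients that strip <style> blocks entirely.
function inlineCodeStyles(html: string): string {
  return html.replace(
    /<span class="token-(\w+)">/g,
    (match: string, token: string) =>
      TOKEN_STYLES[token] ? `<span style="${TOKEN_STYLES[token]}">` : match,
  );
}
```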
On deploy, a script checks newsletter-state.json for unsent posts and sends them. Same post never goes out twice. There's a proper unsubscribe flow with one-click links and soft deletes.
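The dedup logic is a set difference against the state file. A minimal sketch, with a hypothetical state shape:

```typescript
// Hypothetical shape of newsletter-state.json.
interface NewsletterState {
  sent: string[]; // slugs that have already been emailed
}

// Posts that are live but not yet in the sent list; drafts never get here
// because the post loader filters them out upstream.
function unsentPosts(allSlugs: string[], state: NewsletterState): string[] {
  const sent = new Set(state.sent);
  return allSlugs.filter((slug) => !sent.has(slug));
}
```

After a successful send, the new slugs get appended to the state file and committed, which is what guarantees the same post never goes out twice.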
Search
Client-side search with Fuse.js at /search. Searches titles, content, and tags with a 300ms debounce. For 39 posts it works fine. If I ever get to hundreds, I'll think about something server-side.
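Fuse.js does proper fuzzy matching under the hood; as a rough stand-in for the idea, the core is a weighted scorer over title, tags, and content:

```typescript
// Simplified document shape matching what gets indexed.
interface Doc {
  title: string;
  content: string;
  tags: string[];
}

// Naive substring scorer standing in for Fuse.js: matches in the title
// outweigh matches in tags, which outweigh matches in the body.
function search(docs: Doc[], query: string): Doc[] {
  const q = query.toLowerCase();
  return docs
    .map((d) => {
      let score = 0;
      if (d.title.toLowerCase().includes(q)) score += 3;
      if (d.tags.some((t) => t.toLowerCase().includes(q))) score += 2;
      if (d.content.toLowerCase().includes(q)) score += 1;
      return { d, score };
    })
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((x) => x.d);
}
```

The 300ms debounce just wraps the call to this in the input's change handler so typing doesn't trigger a search per keystroke.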
Comments
Giscus, backed by GitHub Discussions. No database, no moderation tools to maintain. Readers need a GitHub account, which is fine for a technical blog.
Content Tooling
A few things that make the writing workflow smoother.
pnpm lint:posts checks frontmatter, code block language specs, link integrity, image paths, formatting. I run it before commits. It catches things I'd otherwise miss, like a code block without a language tag or a broken image path.
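A couple of those checks, sketched; the real linter has more rules, and this version only looks at frontmatter presence and unlabeled code fences:

```typescript
const FENCE = "`".repeat(3);

// Check one post file for a subset of what lint:posts catches.
// Returns a list of human-readable errors; empty means clean.
function lintPost(raw: string): string[] {
  const errors: string[] = [];
  // Frontmatter must open the file with a "---" delimiter.
  if (!raw.startsWith("---\n")) errors.push("missing frontmatter");
  let inBlock = false;
  for (const line of raw.split("\n")) {
    if (line.startsWith(FENCE)) {
      // An opening fence with nothing after the backticks has no
      // language tag; closing fences are always bare, so skip those.
      if (!inBlock && line.trim() === FENCE) {
        errors.push("code block without language");
      }
      inBlock = !inBlock;
    }
  }
  return errors;
}
```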
Feature images must be 896x384 WebP. pnpm resize:images:check verifies, pnpm resize:images resizes. Small thing, but it removes one more manual step.
Posts start as drafts (draft: true in frontmatter), hidden from the site, RSS, newsletter, and cross-posting until I remove the flag.
CI/CD
One GitHub Actions workflow, triggered on push to main:
1. Build with pnpm
2. Deploy to VPS via SSH
3. Backup previous build
4. Install dependencies, copy static files to Nginx
5. Restart PM2, health check
6. Send newsletter for new posts
7. Cross-post to Dev.to
8. Cross-post to Hashnode
9. IndexNow (Bing, Yandex, Naver, Seznam)
10. Google Search Console notification
Steps 6 through 10 are non-blocking. Each runs on its own; failures don't roll back the deploy. I merge to main and go do something else. By the time I check back, the post is live, emailed, cross-posted, and indexed.
Numbers
- 13 new posts since migration (39 total)
- 179 commits
- 3 platforms (blog + Dev.to + Hashnode)
- 5 search engines notified on every deploy
- TTFB went from 800ms on Ghost to 120ms
What I Haven't Done
GA4 collects data but I don't have any custom dashboard tying post performance to cross-posting or newsletter numbers. I just check GA4 when I remember.
No A/B testing for titles. Cross-posting to multiple platforms gives me natural variation, but I'm not tracking it.
Feature images are still manual. Each one is a prompt, a generation, a resize. Takes about ten minutes per post. Not terrible, but it would be nice to automate.
How It Works Day to Day
The whole setup fits into how I actually work. I don't block out time for the blog. I have a gap between meetings, I open Cursor, tell Claude what to add or fix, and switch to something else. Next gap, I come back, review, maybe adjust the result.
Most features here were built exactly like that. The Hashnode integration, the IndexNow notifications, the linter improvements. None of them required a focused multi-hour session. Just a task described in plain language, handed off, reviewed later.
Thirteen posts in five weeks happened the same way. Writing them takes real attention though. I write everything myself; LLMs help me proofread, discuss ideas, and gather facts, but the thinking and the arguments are mine. Turns out, that process of writing down what I think forces me to organize my own ideas. That alone makes it worth doing, even aside from the blog itself.
