I Over-Engineered a Static Site and Learned to Stop
I wanted a simple page that displayed a daily liturgical RSS feed in a custom layout and tracked how far along I was in Great Lent. No database, no server to maintain. A weekend project. Instead I spent days building something increasingly elaborate, watched it fail in production, tore most of it out, and ended up with something that works perfectly.
The Problem
RSS is a format websites use to publish a list of updates, like a newsfeed you can subscribe to. The site pulls from a third-party RSS feed and displays it in a custom layout.
The first obstacle: the browser refuses to fetch data from a different domain than the one the page loaded from. This is the same-origin policy; the cross-origin exception, CORS, only applies when the remote server opts in with the right headers, which this feed does not. Moving the fetch to a server sidesteps the problem entirely, because servers are not bound by that restriction.
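The rule keys on the page's origin (scheme, host, and port together). As a minimal sketch, with placeholder URLs standing in for the real site and feed, the comparison the browser effectively makes looks like:

```javascript
// Sketch of the same-origin comparison the browser performs.
// The URLs below are placeholders, not the real site or provider.
function isCrossOrigin(pageUrl, requestUrl) {
  const page = new URL(pageUrl);
  const req = new URL(requestUrl, pageUrl); // resolve relative URLs
  // Origins match only when scheme, host, and port all match.
  return page.origin !== req.origin;
}

// A fetch from the page's own host is same-origin...
isCrossOrigin("https://lent.example.com/index.html",
              "/feed.json");                          // → false
// ...but a third-party feed is not, so the browser blocks the
// response unless the feed's server sends permissive CORS headers.
isCrossOrigin("https://lent.example.com/index.html",
              "https://feeds.example.org/daily.xml"); // → true
```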
That meant I needed some server-side component to do the fetching. That is where I should have reached for the simplest possible tool. Instead, I built something impressive.
The Over-Engineered Version
My first design put a Cloudflare Worker at the centre. A Worker is a small piece of code that runs on Cloudflare's global network, close to the user, before the request ever reaches a server. It would fetch the HTML and the RSS feed at the same time, stitch them together, and return a fully assembled page to the browser in one trip.
Browser → Cloudflare Worker → AWS Lambda (HTML)
↓
RSS feed (cached at edge, refreshed hourly)
I also had AWS Lambda serving the HTML, so the page and the fetch logic could be updated independently. I had justifications for every decision. None of them were necessary.
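The stitching step can be sketched as a pure function. In the real Worker it ran after something like `Promise.all([fetch(htmlUrl), fetch(feedUrl)])`; the placeholder marker and field names here are illustrative, not the code I actually shipped:

```javascript
// Sketch of the Worker's assembly step, isolated as a pure function
// so the data flow is visible. Placeholder names throughout.
function stitchPage(html, items) {
  const list = items
    .map((item) => `<li><a href="${item.link}">${item.title}</a></li>`)
    .join("\n");
  // The page template carries a marker the Worker replaces.
  return html.replace("<!--FEED-->", `<ul>\n${list}\n</ul>`);
}

// One assembled page, one round trip to the browser.
const page = stitchPage(
  "<main><!--FEED--></main>",
  [{ title: "Matins", link: "/matins" }]
);
```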
Where It Fell Apart
The Worker worked in development. In production it failed silently.
The RSS feed provider blocks requests coming from Cloudflare's servers. The Worker would get refused, the data would be missing, and the page would render empty with no indication anything had gone wrong. I tried retry logic and fallback paths. Each fix added another layer to something already more complex than the problem required.
The edge cache was also harder to reason about than expected. I could not inspect it directly to see what the Worker was actually serving. Debugging meant guessing.
The more I tried to fix it, the clearer it became that the Worker was the wrong tool. I had added it because it was interesting to build, not because the site needed it.
The Simple Version
I stripped out Cloudflare Workers entirely and rebuilt around two facts: AWS Lambda (a function that runs on demand without a dedicated server) can fetch the RSS feed without being blocked, and S3 (Amazon's file storage service) can host a static website.
EventBridge (hourly cron) → Lambda → RSS feed → S3
↑
Browser loads page, reads cached JSON
An EventBridge rule, which is just a scheduler, triggers the Lambda once an hour. The Lambda fetches the feed, parses it, and saves the result as a small JSON file in S3. The HTML page, also hosted in S3, loads in the browser. A few lines of JavaScript read the JSON from the same bucket and render the content.
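A minimal sketch of that Lambda, under stated assumptions: the bucket name, feed URL, and object key are placeholders, the XML parsing is a naive regex rather than whatever the real code uses, and the S3 client and fetch are passed in so the core has no hard AWS dependency:

```javascript
// Sketch of the hourly Lambda. All names here are placeholders.

// Pull title/link pairs out of the feed XML. A real build might use a
// proper RSS parser; a regex is enough for a well-formed feed.
function parseFeed(xml) {
  const items = [];
  for (const [, body] of xml.matchAll(/<item>([\s\S]*?)<\/item>/g)) {
    const title = (body.match(/<title>([\s\S]*?)<\/title>/) || [])[1];
    const link = (body.match(/<link>([\s\S]*?)<\/link>/) || [])[1];
    if (title && link) items.push({ title, link });
  }
  return items;
}

// EventBridge fires this once an hour; the result lands in S3 as JSON.
// s3 and fetchFn are injected (e.g. an @aws-sdk/client-s3 wrapper).
async function run({ feedUrl, bucket, s3, fetchFn }) {
  const xml = await (await fetchFn(feedUrl)).text();
  await s3.putObject({
    Bucket: bucket,
    Key: "feed.json",
    Body: JSON.stringify(parseFeed(xml)),
    ContentType: "application/json",
  });
}
```

The EventBridge side is just a schedule (e.g. a `rate(1 hour)` expression) with this function as its target.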
No CORS problem, because the browser is fetching from the same place it loaded the page from. No opaque cache, because the JSON file in S3 is just a file I can open and inspect. The whole thing is three files.
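Those few lines of browser JavaScript can be sketched the same way; the element id and JSON shape are illustrative, not taken from the live site:

```javascript
// Sketch of the page's client script. Element id and field names
// are placeholders.
function renderItems(items) {
  return items
    .map((item) => `<li><a href="${item.link}">${item.title}</a></li>`)
    .join("");
}

// Same-origin fetch: feed.json lives in the same S3 bucket as the
// page, so no CORS headers are needed anywhere.
async function load() {
  const items = await (await fetch("/feed.json")).json();
  document.querySelector("#feed").innerHTML =
    `<ul>${renderItems(items)}</ul>`;
}
```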
Cost
| Service | Monthly cost |
|---|---|
| AWS Lambda | $0.00 |
| AWS EventBridge | $0.00 |
| Amazon S3 | $0.00 |
All three services cover this workload within their free tiers.
Lessons Learned
Every layer I added had a justification. Together they added up to a system that was harder to debug, dependent on infrastructure I could not control, and broken in production in a way that took real time to diagnose.
The simple version has none of those problems. It is inspectable. It is boring. It works.
Complexity has to earn its place. I added Cloudflare Workers because it was an interesting thing to build, not because the site needed it. A scheduled function and a static file turned out to be enough.
The site is live at lent.velivasakis.tech.