The title of this post says it all: Why use static pages for a blogging system?
I've had conversations with people who wrinkle their noses when they hear about static pages being used to deliver a blog -- as if an old technology automatically implied obsolescence instead of being time-tested and reliable.
Here are my reasons for using static pages in MeTal:
The first and most significant reason (for me) to use static pages is that it decouples the blogging engine from the blog itself. People shouldn't have to run your blogging engine just to see pages on your blog; they should just be able to see the pages, period. If your engine breaks or has to be taken offline, the least you can do is afford your users the benefit of a read-only copy. Static pages do this by default, and don't even need a CDN in front of them to make that happen.
WordPress in particular suffers, in my opinion, from too tight a coupling between the application and the Web server used to run it (mainly Apache). The blogging engine shouldn't be dependent on any particular Web server to work well, or to accomplish any of its particular functions.
Another advantage of this separation of concerns: the content doesn't have to be hosted in the same place as the blogging engine. You can have your content generated remotely and uploaded, or generated via one Web interface and presented to the world via another.
(Note that right now, with MeTal's alpha, I do have an .htaccess file that's needed to protect the application directory. But that's more in the vein of something that would have to be done on any Web server, and does not in my opinion constitute a tight coupling. It's just that Apache is the most common delivery target, and so I had to start somewhere.)
Static Web pages are disgustingly easy to optimize, no matter where they're delivered from or under what circumstances. Again, a CDN is a useful way to avoid having your site hammered, but having your content represented primarily as a set of static resources makes it even easier for CDNs to do their jobs.
Endless studies have been performed on how much lag users will tolerate when they open a Web site, especially one they haven't visited before. You have only a few seconds, tops, before a user gets frustrated and closes the browser tab. Given that Web pages are routinely becoming multi-megabyte behemoths, it's a losing battle.
The less content has to be built on demand for the user, the better. A personal blog, or even a professional corporate one, hardly needs to be built on demand for every visitor!
Most of what makes static pages feel slow has little to do with the rebuild process itself, and more to do with the fact that we have to sit and wait for that process to finish. In other words, it's about the rebuild process being a blocker.
If you have a site with 2,000 pages and you need to rebuild everything, it makes sense to have a way to do that in the background -- while you're doing other work -- and be pinged when it's finished. It also helps to rebuild in the proper order: most to least recent, with any immediately-changed items pushed to the top of the queue so they show up first. (Movable Type used these ideas, and so it made sense to emulate them.)
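That ordering -- newest first, with freshly changed pages jumping the queue -- can be sketched with Python's heapq module. To be clear, this is a hypothetical illustration, not MeTal's actual implementation; the function names and page tuples here are invented for the example.

```python
import heapq
import itertools
import time

# Tie-breaker so entries with equal priority pop in insertion order.
_counter = itertools.count()

def make_queue(pages):
    """pages: iterable of (path, last_modified_timestamp).

    Priority 1 = ordinary rebuild work; timestamps are negated so
    the most recently modified pages sort (and rebuild) first.
    """
    heap = []
    for path, ts in pages:
        heapq.heappush(heap, (1, -ts, next(_counter), path))
    return heap

def bump(heap, path):
    """Push a just-changed page to the front with priority 0."""
    heapq.heappush(heap, (0, -time.time(), next(_counter), path))

def drain(heap):
    """Yield paths in rebuild order, skipping duplicates from bumping."""
    seen = set()
    while heap:
        _, _, _, path = heapq.heappop(heap)
        if path not in seen:
            seen.add(path)
            yield path
```

In a real engine, `drain()` would run in a background worker so the author can keep editing while the rebuild grinds on, with a notification fired once the heap empties.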
A lot of the burden of regenerating static pages can also be lessened by smart use of things like server-side includes. If you have a sidebar that changes constantly -- or even only once in a great while! -- it makes more sense to leverage that as an SSI and rebuild only that, instead of rebuilding the entire site because of one petty change. Likewise, if you perform a search-and-replace, it makes sense to rebuild only the pages that were changed, not everything, and that's not too difficult to optimize for.
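The sidebar case above is the classic use of a server-side include. Assuming Apache with mod_include enabled (and hypothetical file paths), the template references the fragment rather than embedding it:

```html
<!-- Every generated page pulls the sidebar in at serve time, -->
<!-- so updating the sidebar means rewriting one file, -->
<!-- not regenerating the whole site. -->
<!--#include virtual="/includes/sidebar.html" -->
```

The trade-off is that the server now does a small amount of work per request, but it's a file inclusion, not a full application stack -- still far cheaper than building the page on demand.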