No matter what anyone says, hosting Rails applications is – and is likely to remain for the foreseeable future – a cast-iron bitch. An expensive, slow one, too.
I’m no stranger to web hosting. I set up my first web server over 10 years ago – manually, of course; none of these control panels – and have done it many times since, as technology changed and languages shifted. I’ve been responsible for the production web presence of a number of companies, and along the way I’ve dabbled with a number of server-side application technologies, including ColdFusion and even Java (ugh!), but mostly PHP, since that’s what everyone uses.
And for good reason. PHP has many flaws as a language, mainly to do with security and the hideous mess most PHP projects turn into, but it’s a joy to deploy. Here’s how it works.
PHP is actually two things: a language, and a compiled executable which interprets and acts on commands issued in that language. It integrates tightly with Apache – the most popular, and widely considered the best, front-end web server – so that when a PHP web page is requested, this is what happens:
- Apache reads the web page, typically page.php, determines it requires action by the PHP interpreter, and creates an instance of that interpreter
- The commands in the page are relayed to the PHP interpreter instance
- The interpreter processes the commands and performs whatever action is required, typically HTML generation and/or database actions
- Apache receives back the data and serves it inline with the rest of the web page.
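The Apache side of this arrangement amounts to a couple of lines of configuration. A minimal sketch, assuming a stock Apache 2.x with the mod_php module installed (module paths vary by distribution):

```apache
# httpd.conf – load the PHP interpreter directly into the Apache process
LoadModule php5_module modules/libphp5.so

# hand any .php file to the embedded interpreter
AddType application/x-httpd-php .php
```

With that in place, a file like page.php is executed in-process and its output spliced into the response – nothing extra to start, nothing extra to monitor.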
This all happens extremely quickly, and since PHP is integrated so tightly with Apache, you only have to worry about maintaining that one application. Sure, you configure PHP – via a single .ini file. Everything else is Apache all the way, and it’s a highly reliable and very fast way to serve web pages.
An especially welcome benefit from this arrangement is that you only require one Apache server running to serve any number of PHP applications. You can easily configure Apache to serve any number of domains via its VHosts system, any or all of which may contain active PHP pages. Apache simply instantiates (and destroys) PHP instances, which respond to the relevant pages, whenever required. The PHP executable doing the interpreting neither knows nor cares where the commands it’s interpreting come from. You’ve effectively created a “pool” of servers, any of which can respond to any PHP application, and Apache can balance its load amongst them as required.
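A sketch of what that multi-site setup looks like in Apache 2.x config, with hypothetical domain names and paths:

```apache
# httpd.conf – any number of sites, one Apache, one PHP setup
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.first-site.com
    DocumentRoot /var/www/first-site
</VirtualHost>

<VirtualHost *:80>
    ServerName www.second-site.com
    DocumentRoot /var/www/second-site
</VirtualHost>
```

Any .php file under either DocumentRoot is handled by the same embedded interpreter; there is nothing per-site to run, allocate, or watch.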
The situation with a Rails application is very different, and not in a good way.
The current “best practice” for deploying a single Rails application is as follows:
1. Apache server front end, with a vhost interpreting the URL and relaying relevant requests to a cluster of mongrel servers, addressable by (local) IP
2. User then configures and maintains a separate cluster of mongrel servers dedicated to that single Rails application and responding on the IPs configured in Apache
3. Apache is further configured to go through mongrel for every non-static request.
This is then repeated for every Rails application on the box.
This has several disadvantages compared to the situation with PHP, to wit:
- Administrator becomes responsible for setting up, maintaining, and constantly monitoring a collection of independently-running applications, in addition to the core Apache server
- Mongrel servers not currently being used are not destroyed, but continue running, wasting memory (every instance of mongrel currently running on my server uses in excess of 100M)
- Members in a cluster must be predefined and cannot be allocated on the fly according to need. For example, if you have 1GB of memory and 3 Rails apps, you are limited to distributing about 10 instances of mongrel between them – and you must decide the split in advance. If one of the applications suddenly receives a lot of hits, its performance will suffer while the mongrels for the other apps sit unused – the opposite of the “pool” you have access to in a PHP situation
And this is before even mentioning the underlying speed of mongrel, Ruby and Rails in general, which can only be described as horrible.
So, the sequence of events when a page request for your Rails app comes in is as follows:
- Apache receives the request and determines it is for your Rails app
- Apache picks a server in your mongrel cluster, and sends the request (NOT commands, as in PHP) to it
- Mongrel, moving with the lightning speed of a slug through treacle, parses the request and hands it off to the Ruby interpreter running your Rails code – itself not much of a speed demon – which, after a lengthy time out for self-reflection and “me time”, reluctantly listens to, thinks about, thinks about some more, and then finally acts on the request, running your controller and rendering your templates
- Mongrel, taking its sweet time, eventually manages to respond with the complete page
- Apache passes on the completed page to the client.
Not many more steps there, I admit. But it’s the speed of it which is the killer. PHP commands are detected, separated from enclosing HTML, and dispatched from within Apache, a lightning-fast, time-proven, highly reliable web server written in C. But in Rails, the job of parsing the request and dispatching it to your application code is performed by Mongrel and Rails – written largely in Ruby. I don’t have any figures depicting how much slower it is, but it is – a lot.
Combine that with the inflexibility of distributing resources where needed, and having to independently spawn and monitor a separate cluster of servers for every separate application, and you’ll understand that when people say hosting Rails is a nightmare, they’re right.
Rails has a lot of things going for it. It’s a joy to write in, mostly, and the development environment is great. It’s much, much more productive than the likes of Java while being much, much nicer than PHP.
But it’s slow as hell, and deployment sucks, and don’t let anyone tell you otherwise.