Posts Tagged ‘rails’

Rails 2.3.5 still broken on 1.9.1

Sunday, November 29th, 2009

So Rails 2.3.5 is out, and while I don’t really know what’s new, I can tell you what isn’t: running on the current stable version of the Ruby programming language, as reported by Ruby-Lang.

If you would like to start testing Rails on 1.9, you’ll need to apply the patches listed at this Lighthouse ticket, which fix the two most serious problems. A kind user at the bottom of that ticket has pulled the patches together into monkeypatches suitable for slipping straight into your config/initializers folder – I can confirm that they work for me.

I cannot fathom Rails Core’s bizarre refusal to pull these patches into 2.3.5. Yet another release goes out that is completely non-functional on the newest, fastest and best version of the language, forcing users to seek out and apply non-official monkeypatches – or just forsake 1.9 altogether, as the vast majority seem to be doing. In what way is this in the best interest of the framework or the community? What is holding them back? The patches might not be 100% perfect but they’re a lot better than what we have now, which is that out of the box Rails is broken on 1.9.1.

UPDATE: Happily, the 1.9.1-patched Rails does seem to be working on 1.9.2 as well. I’ve installed ruby 1.9.2dev (2009-09-07 trunk 24787) and my favourite “fussy site” is working there, too. That’s great news, because 1.9.2 is going to be the “serious” version of Ruby 1.9 and has further big performance increases.

Rails Metal MongoDB GridFS access

Friday, November 20th, 2009

I’m kind of new to Metal (and all things Rack) but this works. Just for reference.

# Allow the metal piece to run in isolation
require(File.dirname(__FILE__) + "/../../config/environment") unless defined?(Rails)

class ImageShow
  def self.call(env)
    request = Rack::Request.new(env)
    if request.path_info =~ /^\/show_image\/(.+)$/
      if GridFS::GridStore.exist?(Media.database, $1)
        GridFS::GridStore.open(Media.database, $1, 'r') do |file|
          [200, {'Content-Type' => file.content_type}, [file.read]]
        end
      else
        [404, {'Content-Type' => 'text/plain'}, ['File not found.']]
      end
    else
      [404, {"Content-Type" => "text/html"}, ["Not Found"]]
    end
  end
end

That will need to be saved as app/metal/image_show.rb (the filename has to match the class name). Slow… but not as slow as doing it through a Rails controller.
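For anyone else new to Rack: the entire contract the Metal piece satisfies is tiny – an app is anything with a `call(env)` method returning a `[status, headers, body]` triple. Here is a dependency-free toy version of the same routing pattern (`TinyApp` and its paths are made up for illustration):

```ruby
# A Rack app is just something that responds to call(env) and returns
# the [status, headers, body] triple. No gems required to see the idea:
class TinyApp
  def self.call(env)
    if env["PATH_INFO"] =~ %r{\A/show_image/(.+)\z}
      # $1 holds the captured filename from the regex
      [200, { "Content-Type" => "text/plain" }, ["you asked for #{$1}"]]
    else
      [404, { "Content-Type" => "text/plain" }, ["Not Found"]]
    end
  end
end

status, _headers, body = TinyApp.call("PATH_INFO" => "/show_image/logo.png")
# status is 200, body is ["you asked for logo.png"]
```

The real Metal piece above does exactly this, just with GridFS as the backing store.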

Rails kinda running on 1.9.1

Wednesday, November 18th, 2009

So I’ve been ranting recently about Rails’ lack of Ruby 1.9 support. And rightfully so! It’s inexcusable for the #1 Ruby framework not to support the fastest, most efficient version of Ruby, especially after it’s been available for almost 2 years.

But for some people, it is kind-of possible to use Rails on 1.9 today. Rails’ 2.3-stable branch (“stable” in quotes) has fixed a few of the worst problems in 2.3.4, such as the string comparison bug which made authentication systems unworkable. The showstoppers for me, like the utf8-in-templates issue, remain. I wouldn’t rely on it in production – not for a site your job depends on, anyway – but as long as you don’t use certain features of Rails (like i18n), and don’t need to deal with utf8 much if at all, you may be able to use it.

You should switch to 1.9.1 as soon as possible. I see between 20% and 50% higher performance than 1.8.7, and memory usage is significantly lower and, even better, less prone to leakage. In one of my projects there is a daemon which does nothing except wait for items to hit a queue and then launch a script to deal with them. On 1.8.7 it would gain a meg or so every few hours and I’d be nervous about leaving it running unmonitored. On 1.9.1 it sits rock solid at 6.5M for weeks on end. I am totally sold on 1.9.1.

Anyway, here are the instructions to get Rails up to the latest possible usable state. Open a terminal at the root of your Rails app:

# head into /vendor
$ cd vendor
 
# clear out any other rails
$ rm -rf rails/
 
# clone rails into here
$ git clone git://github.com/rails/rails.git rails
Initialized empty Git repository in /Users/sho/projects/myproject.com/vendor/rails/.git/
remote: Counting objects: 123204, done.
remote: Compressing objects: 100% (27700/27700), done.
remote: Total 123204 (delta 95101), reused 121853 (delta 94037)
Receiving objects: 100% (123204/123204), 21.72 MiB | 380 KiB/s, done.
Resolving deltas: 100% (95101/95101), done.
 
# enter that directory
$ cd rails/
 
# checkout 2.3 stable, ignore the errors
$ git checkout -b 2-3-stable remotes/origin/2-3-stable
warning: unable to unlink arel: Operation not permitted
warning: unable to unlink rack-mount: Operation not permitted
Branch 2-3-stable set up to track remote branch 2-3-stable from origin.
Switched to a new branch '2-3-stable'
 
# go back to the root dir 
$ cd ../..
 
# run rails as 1.9.1. get ready to find out which gems you forgot!
$ ruby1.9 script/server
 
# hello, console
$ script/console --irb=irb1.9
Loading development environment (Rails 2.3.4)
>> RUBY_VERSION
=> "1.9.1"

And there you go. It’s working, kind of. Try it out.
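If you want a guard against the app ever silently booting under 1.8 again, a one-liner initializer does it (the filename and message here are my own invention, not anything official):

```ruby
# Hypothetical config/initializers/ruby_version.rb – refuse to boot on 1.8.
# A plain string comparison is good enough for the 1.8-vs-1.9 distinction.
unless RUBY_VERSION >= "1.9"
  abort "This app expects Ruby 1.9.x but is running #{RUBY_VERSION}"
end
```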

I would probably recommend using thin as the web server, but if you need a working mongrel in 1.9, my gem is still available:

sudo gem1.9 install sho-mongrel

That will probably die when github pulls gems for real but who knows, maybe the official mongrel will have been fixed by then (HA!).

Meanwhile, I’m going to step up my harassment of the Rails Core team to get this crap fixed; it is a fucking JOKE that it still doesn’t work 100%.

update: moved the gem to gemcutter.

Adrift on Rails

Monday, November 16th, 2009

Ruby on Rails development seems to be stalled, with no substantial release since March and many show-stopper bugs in the current version, 2.3.4, such as the claimed Ruby 1.9 support which does not work, meaning that the official Rails gem doesn’t run on the only version of Ruby that doesn’t leak memory like a firehose leaks water. Freezing to edge is broken and has been broken for months. Needless to say, that’s because master is now Rails 3.0pre which is in a totally unrunnable state.

The core mailing list seems oddly low-volume, considering 2.3.5 is over a month late and 2.3.4 has so many problems. Working patches to fix critical errors languish unmerged for months, constantly needing to be regenerated to keep up – not that there’s anything official to keep up with, since there’s no real 2.3.5 target and master is fixated on the faraway 3.0. The Lighthouse tracking system is itself a hopeless, unnavigable mess.

So what’s going on with Rails? With the historical core developers absent or tied up with reinvent-the-world expeditions like the 3.0 rewrite, the project appears directionless and adrift. Meanwhile the shine fades and word leaks out that all is not well in Railsville.

What happened? Two causes.

Firstly, the shoot-for-the-moon total rewrite for Rails 3, which has left Rails 2 languishing for the better part of a year now and Rails 3 in a totally unrunnable state.

Secondly – well, this is just my opinion. But one of the reasons Rails was great at first was that it was written by people who actually wanted to use it to make actual proper web apps. And then it became famous, and then it attracted another type of developer – people who just wanted their membership in Rails Core on their resumé, but who weren’t going to actually use it for real-world projects. So instead of getting fixes to seven-month-old show-stopper bugs, we get Engines and RESTful Resource Mapping and Application Templates, none of which are much used or useful in the real world.

Rails is looking sicker by the month. And since Core has bet the company on Rails 3.0, it better blow the world away and fix everything, or I have a feeling a lot of Rails devs are going to start looking for the next big thing. Or actually, the next small thing. I would trade off 90% of Rails’ useless bloat and feature creep for a minimalist framework that actually works, has developers who care, has a readable codebase and a clear direction. Basically, what Rails used to be before DHH got tired of saying “NO!” and it turned into a stumbling monster.

Sinatra? It’s possible but it’s a little too unstructured; great for a tiny API but I’d be worried about trying to use it for a large project. Django is by all appearances in even worse shape than Rails. Ramaze? Looks very interesting.

Update to turn down the tone a bit and correct some links. I will follow up with some constructive posts later; it is actually possible to run Rails on 1.9, just takes some work.

Image watermarking with CarrierWave

Friday, November 13th, 2009

I love the CarrierWave library for handling image uploads. But how can we do watermarks?

Easy. Prepare a transparent PNG with your logo, then add the following rule to your Uploader class:

  def watermark(path_to_file)
    manipulate! do |img|
      # read the logo and composite it over the bottom-right (SouthEast) corner
      logo = Magick::Image.read(path_to_file).first
      img = img.composite(logo, Magick::SouthEastGravity, Magick::OverCompositeOp)
    end
  end

Now call it like this:

  version :medium do
    process :resize_to_fill => [500,500]
    process :watermark => ["#{RAILS_ROOT}/public/images/logo.png"]
  end

Bingo, a beautiful watermark.
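Incidentally, the `process :watermark => [...]` line works because CarrierWave records the method name and arguments at class-definition time, then sends them to the uploader instance when each version is built. A rough toy sketch of that dispatch – not CarrierWave’s actual code, and `FakeUploader` is entirely made up:

```ruby
# Toy illustration of a process-style DSL: declarations are recorded as
# [method, args] pairs, then replayed against the instance later.
class FakeUploader
  def self.processors; @processors ||= []; end

  def self.process(hash)
    hash.each { |method, args| processors << [method, Array(args)] }
  end

  def run_processors!
    self.class.processors.each { |method, args| send(method, *args) }
  end

  # same shape as the real declaration in the Uploader class above
  process :watermark => ["logo.png"]

  def watermark(path)
    @applied = path   # stand-in for the RMagick work
  end

  attr_reader :applied
end
```

Note that `watermark` can be defined after the `process` call, because nothing is actually sent until `run_processors!` fires.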

Rails can’t scale!

Tuesday, September 15th, 2009
Processing AdminController#do_import (for 124.170.0.0 at 2009-09-14 11:33:51) [POST]
  Parameters: {"commit"=>"submit", "authenticity_token"=>"qUBkNAAEB7EM4SGQRnlM+uaKlqmVM09+8l4sCFhPvBw=", "import_order"=>{"number"=>"5000"}}
Redirected to http://adomain.com:3000/admin/import
Completed in 6026460ms | 302 Found [http://adomain.com/admin/do_import]

Er, obviously this is not Rails’ fault. It’s an import script processing large numbers of records, and it blocks while it’s doing so. I really should farm it out to a worker daemon, but it’s just a one-off thing as I set up a new site, so I can’t be bothered going to all the trouble for something that will only be used once.

But I am quite proud of that number – almost 2 hours for a single request. Lucky that Mongrel doesn’t time out requests, unlike, say, Passenger.

UPDATE: another one

Completed in 7922222ms | 302 Found [http://adomain.com/admin/do_import]

Rails is a ghetto all right

Tuesday, April 28th, 2009

Remember Zed Shaw’s “Rails is a Ghetto”? All about how the Rails community is being taken over by boring, pretentious, corporate wanna-be suits with their prattle about “professionalism”, pretentious posturing, MBA peacocking and assorted other behaviours unwanted in the dynamic and creative Ruby community.

Well, these MCSE-having douchebags are out in full force today. You see, there was this little Ruby conference in San Fran, and a presenter there had the temerity to include – wait for it – references to porn in his talk. Plus some pictures of scantily clad girls.

OH NOES

Now, I thought the talk was fine, and very well made. The content may have been slightly risqué, in a US context anyway, but no big deal. Actually I appreciate the creativity – there’s nothing more boring than yet another “professional” presentation. I use Ruby because I hate acting “professional”.

But oh boy, the reaction.

Some commenters were frankly laughable – one woman in particular went so far as to claim the presentation made her fear for her safety, as if the entire body of men in the audience were in danger of rising as one and assaulting her in some kind of insane porn-fueled gang rape:

To most of these men around me, I am, at best, an oddity, and at worst, a sexual target. I feel a little less safe.

I suggest this poor woman seek therapy for these delusions.

Other reactions include pathetic “I am being victimised” attention-seeking, lame attempts at demonstrating how much “I truly care about women” etc, hilarious “I am leaving the Ruby community and re-installing Visual Studio” threats (please do!), and every combination thereof. I cannot help but think that if Matt’s presentation has the effect of getting rid of these disingenuous wowsers then he should henceforth be invited, nay required, to present at every Rails conference.

Comments from the SlideShare page from sad Java programmers:

I’ve talked with two large IT shops who’ve said that there is no way they’ll ever let Ruby in their companies because of this horse shit.

Lol, good. The less “large IT shops” pissing in the Ruby pond the better. And if their management is so stupid as to choose technology because of “some presentation given by some guy at some conference” then no doubt they’ll be out of business soon anyway.

I think Matt just blew the image of the Ruby community by doing something like this.

WTF? Who even thinks like this? Whatever, I am glad they do:

I’ll avoid joining the Ruby/Rails community at all. So will many other engineers, given the remarkable and continual display of unprofessionalism from you, the community, and even its leaders

Uh, OK. Door’s that way. By the way, you’re not a real engineer, did you know?

Is this the ruby community , sexist and racist? I am going back to .net and FU all!

Lol, enjoy! Wait, racist?

And so on and so on. Pretty funny. Thanks for the GC patch, Matt, it’s proving very effective.

About the only person whose reaction to this I truly admire is the father of Rails himself. And if he doesn’t mind it, then all these “this is inappropriate for a professional conference blah blah” business-card-offering, synergistic-opportunity-seeking, suit-wearing jerks can go jump off a cliff, IMO.

Thin: Ruby 1.9.1 vs. Ruby 1.8.6

Friday, March 27th, 2009

Thin is the only standalone web server that’s compatible with both 1.8 and 1.9, so let’s test how they go head to head.

I did just the most cursory of tests, using a recent web site I’d been involved in, running Rails 2.3.2. Tests are with and without the DB turned on; turning it on slowed us down by a factor of 10 and was testing this dog-slow laptop disk more than Ruby anyway.

DB off:

1.9.1: 212.15 req/sec
1.8.6: 178.64 req/sec

DB on:

1.9.1: 23.53 req/sec
1.8.6: 19.49 req/sec
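For what it’s worth, the actual speedups those numbers represent (quick arithmetic check):

```ruby
# Relative 1.9.1 speedup over 1.8.6, from the req/sec figures above
db_off = 212.15 / 178.64 - 1.0   # ≈ 0.188, i.e. ~19% faster with DB off
db_on  = 23.53  / 19.49  - 1.0   # ≈ 0.207, i.e. ~21% faster with DB on
```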

So for this particular (pretty simple) site, we get about a 20% speed increase, which is surprisingly little – it “feels” faster than that. I look forward to testing 1.9.1 further on more complex installations, and with Passenger. Locally, I’ve switched over to 1.9.1 for most development now, and will move over completely once Rails gets its fucking act together.

This is what I hate about Rails

Tuesday, February 17th, 2009

This is what I hate about Rails:

  1. New feature added to Rails 2.3.0RC1 allowing simple localisation by template naming convention – ie, if locale is set to :ja and an index.ja.html.erb file is present, that gets rendered in preference to any other index files. Cool!
  2. New feature doesn’t work properly and exhibits an insidious bug where it manages to set a wrong Content-type in the HTTP headers, causing all sorts of nasty, hard-to-troubleshoot problems
  3. Problem is reported
  4. Rails Core replies “wontfix”

The “fix” Josh suggests, needless to say, doesn’t fix anything.

That is utter bullshit. A major new feature of Rails 2.3.0, as reported in Ruby Inside and on the Riding Rails blog itself, doesn’t work. Has no-one else tried it? Has anyone used this at all? Are the other commenters on this ticket and I the only three people in the world who ever tried to use this feature?

There is no fucking excuse in the world for Rails to silently be sending a wrong Content-Type, no matter what.

Wednesday, February 11th, 2009

I’d been blissfully ignorant of Gregg “I don’t have enough G’s in my name” Pollack’s latest yawn-cast until the inimitable Wincent Colaiuta brought it to my attention. I’ve been unimpressed in the past by Mr Pollack’s shaky grasp of what he’s talking about, not to mention his nasty money-grubbing antics surrounding RubyConf 2008. However, I am but a (highly immature) man, and so similarly tantalised by the promise of a “rant”, I grabbed episode 5 and gave it a look.

My first thought is that Mr Pollack doesn’t really know what the phrase “scaling rails” means – no surprises there, since he didn’t know what “scaling ruby” meant either – ie. nothing, since you can’t “scale” a god damn language. He seems to be confusing it with performance and responsiveness – desirable attributes for sure, but not what I understand “scaling” to mean. The first few episodes do not even mention scaling, focussing on client-side perception of page load speed. Indeed, his first mention of “scaling” in the “Scaling Rails” series is this:

Before you attempt to Scale your Rails application, you need to know where and how to scale it.

Surely an insight for the ages there. I would write that down right now, if I could figure out what the hell it is supposed to mean. Apparently in Gregg’s world, “scaling” is something you do on a lark, just for fun, maybe on a rainy day or if you’re just plain bored. Hm, I don’t have anything much to do today – I think I might scale my websites! Before I attempt that though, I’d better learn up on where and how to scale them. Take it away Gregg!

Unfortunately, Gregg isn’t interested in, or aware of, any “how” that isn’t “caching” and any “where” which isn’t “uh, in the bits which will be cached”. Every single episode of the series so far focuses on caching strategies. Again, HTML caching is a useful and important tool to improve performance and efficiency but that’s merely one variable in the scaling equation. It’s a “vertical” optimisation, not a horizontal strategy, which is what my conception of the art of scaling is all about.

However, even Pollack’s advice on HTML caching leaves a lot to be desired. As Wincent points out:

Next up, he shows how you can include dynamic content on a statically cached paged. This is where he goes off the rails. His recommendation is that you use an AJAX callback to pull down the dynamic data. So let me get this straight: you’re caching the page to avoid hitting the Rails stack, but then you do a request that does exactly that (hits the Rails stack) to fill in the dynamic content…

It’s important to note that there’s nothing wrong with this technique per se. If a page is very expensive to generate, but just needs one little piece of dynamic content, this kind of thing can be very useful.

However, I think Pollack is barking up the wrong tree with the use he’s considering – a dynamic log in/log out link. His suggestion as written is to use AJAX to write in the login status, on every page in the site regardless of whether it’s cached or not. In fact, as written, if the page doesn’t get cached, the login status will be written twice! Not to mention that he’s forcing users to pull down the entire Prototype JS library just to implement this one little thing.

I suspect Mr Pollack is really just speaking about his own experiences here. His sites tend to be very content-heavy, very static, and it’s fairly obvious by looking at, say, EnvyCasts, that his experiences there have informed the strategies he passes on in the screencast. However, I would submit that his site is actually a fairly niche example. It’s a highly cacheable, rarely-changing site which indeed only really requires a single little bit of dynamic content – the login.

Which brings me to my main dispute with Pollack’s screencast. No, login status is not overrated. It’s highly useful to be able to check at a glance if you’re logged in to a site – especially with many sites’ less than perfect autologin functionality. It’s equally useful to be able to log out with a single click, especially if you have multiple accounts on a site. Forcing your users to click through a “my account” page – which may or may not even take you to your account, if you’re not logged in, that is if you have an account at all – is just lazy.

Especially since, as Wincent points out, it’s trivial to simply implement the login status client-side, using cookies and a bit of javascript:

I for one really appreciate the visual feedback that shows me whether I’m logged in or not; and given that implementing it is basically zero-cost using the method I’ve just described, I don’t see any reason why not to.

Indeed not.

The rest of the screencast is more of what we’ve come to expect from Mr Pollack & co. – a superficial treatment of things you probably already knew, or could easily find out. If you’re such a beginner that you don’t know anything about page caching, I recommend heading over to railscasts, starting from the beginning, and then just watching them all.

This video might be of some interest as a kind of summary, if you can sit through it – not only do you get to stare at Mr Pollack’s annoying face the whole time, the sound is totally out of sync, making it even more annoying. Pollack has an irritating habit of favouring the camera with a “knowing look” whenever he imagines he is making a particularly profound revelation, which is sure to have you searching frantically for the “press here to electrocute presenter” button:

One way we can do it, is by adding an … AJAX callback (long, tension-filled pause)

At least, though, Mr. Pollack has finally settled on the right price for his screencasts.

PS. As an aside, I thought the name of the company sponsoring Pollack’s latest effort, New Relic, was kind of ironic. A third party website trying to charge money for app monitoring? That does seem like a relic of a bygone age. And the service does seem to be new, hence a New Relic. I wonder if that was intentional. The name certainly makes no fucking sense at all elsewise. UPDATE: Their service actually does look pretty good. Good luck to them. Still a ridiculous name but hell, what isn’t.

CouchDB session model for Rails

Friday, September 19th, 2008

Here’s my initial stab at a Rails Session model for CouchDB. The marshalling stuff is taken from the example SQLBypass class in the ActiveRecord code.

You’ll need a recent, probably trunk, CouchDB.

class CouchSession < Hash
  # class-level handle on the sessions database
  @@db = CouchRest.database!('http://localhost:5984/sessions')
 
  attr_writer :data
 
  def self.find_by_session_id(session_id)
    self.new(@@db.get(session_id))
  rescue
    self.new(:id => session_id)
  end
 
  # sessions are stored as Base64-encoded Marshal dumps, as in SQLBypass
  def self.marshal(data)   ActiveSupport::Base64.encode64(Marshal.dump(data)) if data end
  def self.unmarshal(data) Marshal.load(ActiveSupport::Base64.decode64(data)) if data end
 
  def initialize(attributes = {})
    self['_id'] = attributes['_id'] ||= attributes[:id]
    self['marshaled_data'] = attributes['marshaled_data'] ||= attributes[:marshalled_data]
    self['_rev'] = attributes['_rev'] if attributes['_rev']
  end
 
  # lazily unmarshal the session data on first access
  def data
    unless @data
      if self['marshaled_data']
        @data = self.class.unmarshal(self['marshaled_data']) || {}
      else
        @data = {}
      end
    end
    @data
  end
 
  def loaded?
    !!@data
  end
 
  def session_id
    self['_id']
  end
 
  def save
    self['marshaled_data'] = self.class.marshal(data)
    self['data'] = data
    self['updated_at'] = Time.now
    save_record = @@db.save(self)
    self['_rev'] = save_record['rev']
  end
 
  def destroy
    @@db.delete(self['_id'])
  end
end

Nice and short – possibly the shortest Rails session class I have seen. The beauty of CouchRest/CouchDB! And we descend from Hash so we can just save the object straight – after marshalling, of course. Cool, huh?

Note that I am actually writing the raw data as well as the marshalled data into the saved doc, for troubleshooting/interest purposes. Feel free to remove that.
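For the curious, the marshalling itself is nothing exotic – just Base64 text wrapped around a Marshal dump (`ActiveSupport::Base64` is a thin wrapper over the stdlib’s Base64 here):

```ruby
require 'base64'

# Round-trip a session hash the same way the marshal/unmarshal pair does:
session_data = { :user_id => 42, :flash => "saved" }

encoded = Base64.encode64(Marshal.dump(session_data))  # plain text, safe to store in a doc
decoded = Marshal.load(Base64.decode64(encoded))
# decoded == session_data
```

The Base64 step matters because Marshal output is raw binary, which you don’t want to shove into a JSON document as-is.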

Not pretty, but it works. Just save it like a normal model. You’ll need to put these into environment.rb:

config.action_controller.session_store = :active_record_store
CGI::Session::ActiveRecordStore.session_class = CouchSession

Note also that I have ignored any differentiation between the record ID and the session ID, negating the need for any special overrides in ApplicationController. However, the session IDs Rails generates are large and you might find them unattractive in CouchDB – it would be fairly simple to separate them, but then you’d need a new map view and an override. I feel it’s simpler to just use the Session ID as the doc ID and damn the torpedoes. YMMV.

Improvements? See something wrong with it? Let me know! ;-)

Rails – getting milli/microseconds into strftime

Wednesday, August 20th, 2008

I think this is the only way to do it.

>> Time.now.strftime("%Y/%m/%d %H:%M:%S.#{Time.now.usec} %z")
=> "2008/08/20 01:22:28.367899 +0000"

At the microsecond level, it’s possible for Time.now to change mid-evaluation, so if you *really* care about timing you could capture the time object in a variable first and read everything from that:

>> now = Time.now
=> Wed Aug 20 01:40:26 +0000 2008
>> now.strftime("%Y/%m/%d %H:%M:%S.#{now.usec} %z %Z")
=> "2008/08/20 01:40:26.597940 +0000 UTC"

Happily, Time.parse will read that straight out of the box:

>> n = now.strftime("%Y/%m/%d %H:%M:%S.#{now.usec} %z %Z")
=> "2008/08/20 01:40:26.597940 +0000 UTC"
>> Time.parse(n)
=> Wed Aug 20 01:40:26 +0000 2008
>> Time.parse(n).usec
=> 597940
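Worth noting: from Ruby 1.9, strftime understands `%N` (and width variants like `%6N`) for fractional seconds, so the string-interpolation trick becomes unnecessary there:

```ruby
# Ruby 1.9+ only: %6N gives microseconds directly inside strftime,
# so a single Time object is evaluated exactly once.
now   = Time.now
stamp = now.strftime("%Y/%m/%d %H:%M:%S.%6N %z")
# e.g. "2008/08/20 01:40:26.597940 +0000"
```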

Too clever

Tuesday, July 8th, 2008

Another in my series of “lessons I’ve learnt as I progress in my journey from rank amateur to serious developer”.

Some time ago I posted a series of articles discussing the use of “advanced” database techniques with Rails, mostly utilising Postgres. See here:

Switching to PostgreSQL
Strict databases are great for discipline
hacking native UUID support into Schema:Dump
UUIDs in Rails redux

Anyone visiting those old articles these days will notice a new update: don’t do this. Why, you might ask, since I thought it was such a good idea at the time?

Turns out that being “clever” with your DB setup is almost certainly more trouble than it’s worth.

Sure, using PGSQL schemas to share tables between your applications seems like a nice, elegant solution compared to throwing everything in one DB and using table namespacing to distinguish between them. Sure, using the schemas saves you from having to define table names in every model where you access a “foreign” table. Unfortunately, it’s a giant pain in the ass to maintain and constricts your use of migrations.

Sure, hacking Rails to read and write PGSQL’s native UUID format is elegant and looks kewl in your migrations, etc. However, maintaining it is a giant pain in the ass and kills cross-DB compatibility.

These are examples of me being “too clever”. Not trying to brag or anything here, I’m using the term in a self-deprecating manner. What happened was that I, a relatively inexperienced developer, thought I’d found some “cool” solution to a problem I was having – or imagined I was having. I then “solved” the problem in a manner which was worse than anything the “problem” itself presented.

DO NOT use postgresql schemas unless you have a damn good reason for it. “Making your tables look nice” is not a good enough reason.

DO NOT use database-specific data types unless you have a damn good reason for it. Again, “looking nice” is not nearly enough.

UUIDs are strings, pure and simple. Using a UUID data type in PostgreSQL gains you one thing – a check that anything you try to save in there looks like a UUID, which belongs in your own tests anyway. What you lose: cross-platform compatibility and the ability to use standard Rails; what you gain: a reliance on hacks to the schema dumper, and more. And they’re strings, right? So store them in a fricking String field. Anything else is just not worth the hassle.
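To make that concrete: the entire “format check” the native type buys you is one regexp, which can live in a validation or a test instead (`looks_like_uuid?` is just an illustrative helper name):

```ruby
# The canonical 8-4-4-4-12 hex layout of a UUID, as one regexp.
UUID_FORMAT = /\A[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\z/i

def looks_like_uuid?(s)
  !!(s =~ UUID_FORMAT)
end
```

Put that in a `validates_format_of` or your test suite and a plain String column does everything the PGSQL uuid type did.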

Schemas seem cool and elegant, and they are. Unfortunately, what you want isn’t “cool and elegant”, it’s “easy to work with and convenient to maintain and update and migrate and change around when you need to”.

It took me hours, days, to implement the changes to use the above features of PGSQL. It took a couple of hours to undo all of that and go back to standard data types, standard layouts, minimum complexity. Wasted work and time? Sure, but I learnt the lesson I’m trying to convey here – keep it simple, don’t be clever, maintain cross-compatibility and stick to the lowest common denominator unless you have a really, really good reason not to – and mine didn’t count in the end.

I’m still using PGSQL, of course – I like it, and even MySQL 6 inexplicably still doesn’t offer sub-second precision in DATETIME or TIMESTAMP, which I want. And I’m certainly not moving away from the UUID approach, which I firmly believe is best – all of that still stands. But I’ve gotten rid of all the special data structures and non-standard migrations and database-specific data types, because the minuscule aesthetic benefit just was not worth the loss of freedom and convenience everywhere else.

Live and learn, eh.

Rails now generally running on Ruby 1.9.0

Friday, June 6th, 2008

Amid all the RailsConf hype about Rails 2.1.0 and Maglev, not much attention seems to have been given to the fact that Rails is now generally, mostly runnable on Ruby 1.9.0.

Rails running on Ruby 1.9.0

Here’s the Ruby-Core message confirming the news. Not all tests are currently passing – almost all of the failures relate to the time-zone (TimeWithZone) functionality introduced in 2.1.0, and needless to say mongrel still doesn’t work. Nor do the postgres or mysql gems, for that matter, although sqlite3 seems good to go – you ain’t gonna be switching to 1.9 anytime soon on your production site.

However, it runs, and if it runs, we can benchmark it!

I’ll use my /api/pulse controller action from previous testing. This is going to be Webrick-only: Thin installs and seems to work until I load a page, at which point it bombs out. Anyway.

The only modifications to the default rails application I am making here are to change the shebang line in script/server to point to 1.9, and to remove a couple of lines in boot.rb which check for RubyGems versions but bomb out for some reason in 1.9. Everything else is clean and Rails is installed as a gem under the 1.9.0 tree.

Webrick still has some problems under 1.9.0. Running with too high a concurrency seems to freak it out on 1.9.0, so I’ve turned the concurrency down to 5. Also, 1.9.0 is handily faster than 1.8.6 in development mode… but in production mode, uh, you’ll see.

Ruby 1.8.6:

$ ab -n 500 -c 5 http://0.0.0.0:3000/api/pulse
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
 
Benchmarking 0.0.0.0 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Finished 500 requests
 
 
Server Software:        WEBrick/1.3.1
Server Hostname:        0.0.0.0
Server Port:            3000
 
Document Path:          /api/pulse
Document Length:        2 bytes
 
Concurrency Level:      5
Time taken for tests:   2.67175 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Total transferred:      139000 bytes
HTML transferred:       1000 bytes
Requests per second:    241.88 [#/sec] (mean)
Time per request:       20.672 [ms] (mean)
Time per request:       4.134 [ms] (mean, across all concurrent requests)
Transfer rate:          65.31 [Kbytes/sec] received
 
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     9   20  13.1     17      85
Waiting:        7   17  12.5     15      84
Total:          9   20  13.1     17      85
 
Percentage of the requests served within a certain time (ms)
  50%     17
  66%     18
  75%     18
  80%     19
  90%     20
  95%     25
  98%     84
  99%     84
 100%     85 (longest request)

Ruby 1.9.0:

$ ab -n 500 -c 5 http://0.0.0.0:3000/api/pulse
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
 
Benchmarking 0.0.0.0 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Finished 500 requests
 
 
Server Software:        WEBrick/1.3.1
Server Hostname:        0.0.0.0
Server Port:            3000
 
Document Path:          /api/pulse
Document Length:        2 bytes
 
Concurrency Level:      5
Time taken for tests:   3.921955 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Total transferred:      139000 bytes
HTML transferred:       1000 bytes
Requests per second:    127.49 [#/sec] (mean)
Time per request:       39.220 [ms] (mean)
Time per request:       7.844 [ms] (mean, across all concurrent requests)
Transfer rate:          34.42 [Kbytes/sec] received
 
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    21   38  10.5     36      83
Waiting:       13   24   9.4     22      68
Total:         21   38  10.5     36      83
 
Percentage of the requests served within a certain time (ms)
  50%     36
  66%     38
  75%     39
  80%     40
  90%     46
  95%     69
  98%     73
  99%     74
 100%     83 (longest request)

Well, I said it runs, I didn’t say it runs well. Seems to be a problem with WEBrick, probably related to the concurrency issue. By comparison, in development mode the numbers are about 42 reqs/sec (1.9.0) vs around 33 reqs/sec (1.8.6). Still – a giant leap from even a couple of months ago, and I look forward to further leaps in the near future.
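To put the production-mode gap in numbers: from the ab reports above (241.88 vs 127.49 requests/sec), 1.9.0 under WEBrick comes out roughly 1.9x slower:

```ruby
# Requests/sec as reported by ab above
mri_186  = 241.88   # Ruby 1.8.6, production
ruby_190 = 127.49   # Ruby 1.9.0, production

slowdown = mri_186 / ruby_190
puts format("1.9.0 handles %.2fx fewer requests/sec here", slowdown)
# prints "1.9.0 handles 1.90x fewer requests/sec here"
```

Which is the reverse of the development-mode result, where 1.9.0 is about 1.3x faster – supporting the theory that it’s WEBrick, not Rails, misbehaving in production mode.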

The next step is to get a real web server running. This will probably be Thin, once I figure out how to get it installed. I know that it should work – others are already using it on 1.9, though not with Rails. Any ideas welcome!

UPDATED to try to work around concurrency issues with WEBrick under 1.9.0
UPDATE 2: Duh, I was running on development.

Maglev and the naiivety of the Rails community

Monday, June 2nd, 2008

UPDATE: Corrected a couple of typos. Didn’t correct the spelling error in the title because I am enjoying being .

I would like to point out also that this is a rant about vapourware and miserably unmet standards of proof – the benchmarks at RailsConf are worthless and prove nothing, but I would dearly love to be wrong.

And also note that I said I consider a dramatically faster Ruby interpreter/VM impossible until conclusively proven otherwise. I didn’t say completely impossible; I hope it is in fact possible to speed up Ruby by 10x or more. It seems unlikely, very unlikely, but who knows. I am in no way an expert on these things, and do not claim to be; I am only reacting to their hype-filled presentation, and drawing comparisons to the recent history of everyone else’s experiences writing Ruby interpreters sans the 60x speedup.

The demonstration at RailsConf was useless, empty hype, and until extraordinary proof is presented, I will remain deeply skeptical of these extraordinary claims.


So there’s been a presentation at RailsConf 2008 about a product called “Maglev”, which is supposedly going to be the Ruby that scales™ (yes, they actually use the trademark). This new technology is going to set the Ruby world on fire, it’s going to be the saving grace of all Rails’ scaling problems. It’s going to make it effortless to deploy any size Rails site. Its revolutionary shared memory cache is going to obsolete ActiveRecord overnight. It runs up to 60x faster than MRI. And it’s coming Real Soon Now.

Every rails blogger and his dog have posted breathless praise for the new saviour:

Slashdot | MagLev, Ruby VM on Gemstone OODB, Wows RailsConf
RailsConf 2008 – Surprise of the Day: Maglev
MagLev is Gemstone/S for Ruby, Huge News
MagLev rocks and the planning of the next Ruby shootout

So what’s the problem? Why am I being such a party pooper and raining on the new Emperor’s parade?

Because these claims are absolute bullshit and anyone with a hint of common sense should be able to see that.

Right now, there are about 5 serious, credible, working Ruby implementations – MRI, YARV, JRuby, Rubinius, and IronRuby. They all have highly intelligent, experienced, dedicated staff who know a lot more about writing interpreters and VMs than I could ever hope to learn.

So do you seriously think that all these smart people, writing (and collaborating on) all these projects have somehow missed the magic technique that’s going to make Ruby run 60x faster?

It’s definitely possible to get a 2x speedup over MRI and retain full compatibility – JRuby and YARV have shown us that. Maybe it’s possible to get a 3x or 4x broad-based speedup with a seriously optimised codebase. And sure, a few specific functions can probably be sped up even more.

But a broad 20x, 30x, 50x speedup across the whole language beggars belief. It is a huge technical leap and experience suggests they don’t just suddenly happen all at once. Speed gains are incremental and cumulative, a long race slowly won, not an instant teleport into the future. I’d say it is almost impossible, until spectacularly demonstrated otherwise, for a brand new, fully compatible ruby implementation to be more than two or three times faster than today’s best. Things just don’t work that way. Especially things with such a broad range of smart people working hard on the problem.

Extraordinary claims require extraordinary proof. But what do we get? A couple of benchmarks running in isolation. Who knows what they actually are, how tuned they are, whether they’re capable of doing anything other than running those benchmarks fast (I doubt it). No source. No timetable for the source, or anything else.

The bloggers say “this is not ready yet but when it is .. WOW!”. They’re missing the point. Until this thing is actually running Ruby, it’s not Ruby. Benchmarks on a system which isn’t a full implementation of Ruby are utterly worthless. I can write some routine which messes around with arrays in C which is a hundred times faster than Ruby. I might even be able to stick a parser on the front which accepts ruby-like input and then runs it a hundred times faster. Who cares? If it’s not a full implementation of Ruby, it’s not Ruby. Ruby is a very hard language to implement, it’s full of nuance and syntax which is very programmer-friendly but very speed-unfriendly. Until you factor all of that in, these benchmarks ain’t worth jack.

And wow ..! A shared memory cache! Finally, Rails can cast off that shared-nothing millstone around its neck. Except, of course, that shared-nothing is one of its main selling points and wasn’t everyone all on board that train until ten minutes ago? If you want to share objects use the database, something like that?

Oh yeah, the database! Maglev comes with a built-in OODB which is going to set the world on fire. Except of course that OODBs have been around for decades, and the world is not on fire. If OODBs were the solution to all scaling’s ills then Facebook would be using Caché, not MySQL. Guess which one they’re using.

I actually have problems with the whole premise of OODBs, at least as they apply in web applications. Great, you can persist your Ruby objects directly into the OODB. What happens when you want to access them from, say, anywhere else? What if you want to integrate an erlang XMPP server? What if you need Apache to reach into it? What if you want to write emails straight into it, or read them straight out? What if you want to do absolutely anything at all which isn’t a part of some huge monolithic stack? Web applications are all about well-defined protocols, standard formats, and because of those, heterogeneous servers working in unison. I’ve heard OODBs have some benefits in scientific and other niche uses, but web applications are about the most mixed environment imaginable. If using an OODB is the answer, what was the question?

Oh, you think I’m just an RDBMS-addicted luddite? Hell no. I eagerly follow and embrace advances in non-relational database technology – just look around this site, where I talk about being one of the first (crazy) people to press CouchDB into semi-production use, using Tokyo Cabinet and Rinda/Tuplespace for distributed hashtables, and how I’d much rather write a map/reduce function than a stupid, ugly, undistributable slow JOIN. But OODBs? Give me a break.

But oh no. Show them one bullshit-laden presentation and the entire Rails community is champing at the bit and selling both kidneys to ditch all previous Ruby implementations and everything they thought they knew about the persistence layer and embrace some questionable closed-source vapourware, from the guys who brought you that previous world-storming web framework Seaside. What’s that, you’ve never heard of Seaside? I wonder why.

This credulity and blind bandwagon-jumping is the single worst thing about the Rails community.

UUIDs in Rails redux

Tuesday, April 15th, 2008

I have covered forcing ActiveRecord to respect UUID data types in Migrations before. That helps us create our database – now what about in use? We need to create the UUIDs and store them in the database.

These examples all rely on the uuidtools gem, so install that if you haven’t already (and require it somewhere in environment.rb).

1. Setting a UUID using ActiveRecord callbacks

If you don’t need the UUID in the object upon creation but only want to ensure it’s there upon save, do this. Suggestion initially from this page, changes are mine.

We will use the before_create callback to ask AR to add a UUID of our choosing before the record is saved.

Add this to your lib directory:

# lib/uuid_helper.rb
require 'uuidtools'
 
module UUIDHelper
  # before_create callback: assign a random UUID as the id
  # just before the record is first saved
  def before_create
    self.id = UUID.random_create.to_s
  end
end

And now include this in your models:

class Airframe < ActiveRecord::Base
  include UUIDHelper
 
  #my stuff
 
end
>> Airframe.new
=> #<Airframe id: nil, maker_id: nil>
>> Airframe.create!
=> #<Airframe id: "1a82a408-32e6-480e-941d-073a7e793299", maker_id: nil>
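For reference, UUID.random_create emits a standard RFC 4122 version-4 (random) UUID. On Ruby 1.9+ the stdlib’s SecureRandom produces the same format – handy for inspecting what these ids look like without the gem (SecureRandom.uuid here is my stand-in; the code above uses uuidtools):

```ruby
require 'securerandom'

uuid = SecureRandom.uuid
# 8-4-4-4-12 hex digits, e.g. "1a82a408-32e6-480e-941d-073a7e793299"
puts uuid

# The third group always starts with "4": version 4, i.e. random
puts uuid.split('-')[2][0]  # prints "4"
```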

2. Initialising a model with a UUID

If you want the UUID in the model before save, i.e. upon initialisation, we have to get a little more fancy:

# lib/uuid_init.rb
require 'uuidtools'
 
module UUIDInit
  def initialize(attrs = {}, &block)
    super
    self['id'] = UUID.random_create.to_s
  end
end

Now include this in your models:

class Flightpath  < ActiveRecord::Base
 
  include UUIDInit
 
  # my stuff
 
end
>> Flightpath.new
=> #<Flightpath created_at: nil, id: "5e5bcd63-070d-4252-8556-2876ddd83b54">

Be aware that it will conflict with any other initialisation you do in there, so you might want to simply copy in the whole method if you need other fields upon initialisation:

class User < ActiveRecord::Base
 
  def initialize(attrs = {}, &block)
    super
    self['balance'] = 0.0
    self['id'] = UUID.random_create.to_s
  end
 
end
>> User.new
=> #<User id: "...", balance: 0.0>
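The conflict is plain Ruby method lookup, nothing ActiveRecord-specific: a class’s own initialize sits ahead of an included module’s in the lookup chain, so defining your own silently disables the module’s version. A minimal sketch (with a fixed string standing in for UUID.random_create):

```ruby
module UUIDInit
  def initialize(*)
    super
    @id = "fixed-uuid"   # stand-in for UUID.random_create.to_s
  end
end

class Simple
  include UUIDInit
  attr_reader :id
end

class Custom
  include UUIDInit
  attr_reader :id, :balance

  # This shadows UUIDInit#initialize -- the module's version never runs
  def initialize
    @balance = 0.0
  end
end

Simple.new.id   # => "fixed-uuid"
Custom.new.id   # => nil
```

Hence the advice above: if your model needs its own initialize, copy the UUID line into it rather than relying on the module.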

3. Sessions

All this is very well for your own models, but what about Rails’ inbuilt sessions? By default, they want an autoincrementing integer primary key.

The good news is it’s easy to override. Your migration should look like this:

create_table "sessions", :id => false, :force => true do |t|
  t.string   "session_id"
  t.text     "data"
  t.datetime "updated_at"
  t.datetime "created_at"
end

Now add this to your environment.rb file:

# config/environment.rb
CGI::Session::ActiveRecordStore::Session.primary_key = 'session_id'

And this to your Application Controller:

# app/controllers/application.rb
class ApplicationController < ActionController::Base
 
  before_filter :config_session # at the top, if possible
 
  def config_session
    session.model.id = session.session_id
  end
 
end

And voila, your session store is using the session_id as its primary key. I don’t see any point in using a UUID for your sessions’ PK, but if you want to you’ll find an example override class in:

actionpack/lib/action_controller/session/active_record_store.rb.

Remember to drop any preexisting sessions table in your database, or it will likely complain of null ids when you switch to session_id as your primary key.

Rails not working yet on Ruby 1.9 trunk

Saturday, December 15th, 2007

For those entranced by these benchmark results and wanting to host Rails on Ruby 1.9 ASAP …

My testing shows Rails 2.0.1 failing on current svn/trunk installs of Ruby 1.9 on MacOSX 10.5.1.

But WEBrick works!

Ruby 1.9 build:

cd ~/src
svn co http://svn.ruby-lang.org/repos/ruby/trunk ruby-1.9
cd ruby-1.9
autoconf
./configure --prefix=/usr/local/ruby1.9
make
sudo make install
cd /usr/local/ruby1.9/bin/
./ruby -v
-> ruby 1.9.0 (2007-12-15 patchlevel 0) [i686-darwin9.1.0]

Rails 2.0.1 installation:

pwd
-> /usr/local/ruby1.9/bin
sudo ./gem install rails
-> Successfully installed actionpack-2.0.1
-> Successfully installed actionmailer-2.0.1
-> Successfully installed activeresource-2.0.1
-> Successfully installed rails-2.0.1
-> 4 gems installed
-> Installing ri documentation for actionpack-2.0.1...
-> Installing ri documentation for actionmailer-2.0.1...
-> Installing ri documentation for activeresource-2.0.1...
-> Installing RDoc documentation for actionpack-2.0.1...
-> Installing RDoc documentation for actionmailer-2.0.1...
-> Installing RDoc documentation for activeresource-2.0.1...
$

All installs nicely …

Attempting to run a Rails app (after installing a few more requisite gems using the above method):

$ cd /rails/my_1337_app/
$ /usr/local/ruby1.9/bin/ruby script/server
=> Booting WEBrick...
/usr/local/ruby1.9/lib/ruby/gems/1.9/gems/activerecord-2.0.1/lib/active_record/associations/association_proxy.rb:8: warning: undefining 'object_id' may cause serious problem
/usr/local/ruby1.9/lib/ruby/gems/1.9/gems/rails-2.0.1/lib/initializer.rb:224: warning: variable $KCODE is no longer effective; ignored
=> Rails application started on http://0.0.0.0:3000
=> Ctrl-C to shutdown server; call with --help for options
[2007-12-15 07:24:35] INFO  WEBrick 1.3.1
[2007-12-15 07:24:35] INFO  ruby 1.9.0 (2007-12-15) [i686-darwin9.1.0]
[2007-12-15 07:24:35] INFO  WEBrick::HTTPServer#start: pid=3386 port=3000
 
## I request http://0.0.0.0:3000/ ... 500 Internal Server Error
 
Error during failsafe response: can't convert Array into String
127.0.0.1 - - [15/Dec/2007:07:24:52 EST] "GET / HTTP/1.1" 500 60
- -> /

OK, it bombs out trying to actually process a request. But this error is really, really fast! I’m serious – the error *is* really fast.

Mongrel installation fails:

$ sudo ./gem install mongrel
Password:
Building native extensions.  This could take a while...
ERROR:  Error installing mongrel:
        ERROR: Failed to build gem native extension.
 
/usr/local/ruby1.9/bin/ruby extconf.rb install mongrel
creating Makefile
 
make
gcc -I. -I/usr/local/ruby1.9/include/ruby-1.9/i686-darwin9.1.0 -I/usr/local/ruby1.9/include/ruby-1.9 -I.  -fno-common -g -O2 -pipe -fno-common  -c fastthread.c
fastthread.c:13:20: error: intern.h: No such file or directory
fastthread.c:349: error: static declaration of ‘rb_mutex_locked_p’ follows non-static declaration
/usr/local/ruby1.9/include/ruby-1.9/ruby/intern.h:556: error: previous declaration of ‘rb_mutex_locked_p’ was here
fastthread.c:366: error: static declaration of ‘rb_mutex_try_lock’ follows non-static declaration
## etc etc etc

So WEBrick’s all we have for now.

You can track the state of edge Rails’ 1.9 readiness in this ticket on the Rails Trac. Plugins will be another matter, though some fixes are pretty easy; an early failure I’ve seen is with plugins using File.exists?('file'), which is of course deprecated in 1.9 in favour of the far, far superior File.exist?('file').

I like using the 1.9 console, though – it really does feel snappier, especially to load irb!

Migrations much improved in Rails 2

Friday, December 14th, 2007

I hadn’t been using migrations much in Rails 1.x. I just didn’t like the workflow – it was too clumsy, and I got annoyed at writing out the files. The workflow in 1.x was as such:

1.

script/generate migration MigrationName [--svn]

2. go and manually edit 00x_migration_name.rb in db/migrate to reflect your desired database changes, both up and down:

class AddCreatedAtToStaff < ActiveRecord::Migration
  def self.up
    add_column :staff, :created_at, :datetime
  end
 
  def self.down
    remove_column :staff, :created_at
  end
end

3. rake db:migrate to apply the changes to the local database.
4. svn commit and cap deploy:migrations to apply the changes to the remote database

Too long – especially step 2. I knew I should do it in principle, but CocoaMySQL is right there – especially if you make several changes in the space of a few hours. In theory it’s best practice – but in actual practice someone as lazy (and impatient) as me tends to just do it directly in the DB, then eternally put off that day where they’ll finally “move to migrations”.

It’s not even the laziness – it’s the “flow”. I’m doing other things, and I’ve realised I need this field. I don’t want to stop what I’m doing and write out a damn migration file! I want the field there, right now. Too demanding? Maybe, but like I said, the DB is right there, and the difference between a 5-second edit and 2 minutes writing the file is a lot of lost concentration.

And various other solutions, such as auto_migrations, seemed good but in practice are too flaky – a dangerous, unsupported road to take.

Enter Rails 2.0, and migrations are far, far better. The core Rails principle of “convention over configuration” is in full effect here, with excellent results.

Now the process of adding a migration is as such:

1.

script/generate migration add_created_at_to_staff created_at:datetime [--svn]

Note the convention at work here. You’re implicitly telling Rails the table to use, in this case “staff”, and the field you’re adding – in this case one of Rails’ magic “created_at” fields. You then explicitly write the fields out, which you’d have to do anyway in CocoaMySQL or similar, or manually at the mysql command line.

2. rake db:migrate to apply the changes to the local database.
3. svn commit and cap deploy:migrations to apply the changes to the remote database

That’s only one step less, but it was the biggest and most annoying one. The actual creation of the migration file and addition to svn is now a single brief, easy-to-remember line. This is now terse enough and convenient enough to enter my personal development workflow, a very welcome improvement to the framework, and an excellent demonstration of Rails’ core principles in action.

Rails: multiple assets servers with 1 server using Apache

Wednesday, December 5th, 2007

Rails 2 has added the ability to automatically serve static public files – “assets” – from multiple servers, to speed up loading times. Because browsers limit the number of parallel connections per hostname, spreading assets across several hostnames lets pages load considerably faster, and it’s very simple to do. You’ll need Rails 2.0 RC1+ for this. Obviously your implementation details may differ, especially filepaths on the server.

Step 1: In rails_app/config/environments/production.rb:

config.action_controller.asset_host = "http://assets%d.my_kewl_domain.com"
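Rails fills in the %d by hashing the asset path modulo 4, so a given path is consistently served from the same assets0–assets3 host within a process. A rough sketch of that computation (my own approximation, not the actual ActionView code):

```ruby
ASSET_HOST = "http://assets%d.my_kewl_domain.com"

# Pick one of assets0..assets3 by hashing the asset path,
# so the same path always maps to the same host
def asset_host_for(source)
  ASSET_HOST % (source.hash % 4)
end

host = asset_host_for("/images/logo.png")
# e.g. "http://assets3.my_kewl_domain.com" -- stable for this path
```

That stability matters: if the host were chosen at random per request, browsers would re-download the same asset from different hostnames and defeat their caches.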

Step 2: In my_kewl_domain.com.zone

www              IN              A       31.337.31.337
 
assets0         IN              A       31.337.31.337
assets1         IN              A       31.337.31.337
assets2         IN              A       31.337.31.337
assets3         IN              A       31.337.31.337

Step 3: In httpd.conf

<VirtualHost 31.337.31.337:80>
  ServerName assets0.my_kewl_domain.com
  ServerAlias assets1.my_kewl_domain.com assets2.my_kewl_domain.com assets3.my_kewl_domain.com
  DocumentRoot /www/my_kewl_domain.com/current/trunk/public
</VirtualHost>

Note this is a separate vHost from your main application. Note also that it is looking straight into the /public directory of your app.

svn up, cap deploy, restart %w[named httpd mongrel] and you’re done.

Together with the coming Ruby 1.9 (apparently still on course for sometime this month), this is a nice free speedup for your application.

Rails 2 – Too Much Magic?

Tuesday, December 4th, 2007

So I’ve been watching the RailsCasts episodes going through the latest features in Rails 2. I’ve read about them before – but it’s good to see them demonstrated by an expert.

But man – some of this is too much. Don’t get me wrong, I like magic – if I didn’t like magical free stuff I wouldn’t be using Rails at all. But there comes a point where there’s enough magic, and more just trades off too much for too little – diminishing returns, as it’s called.

Take this screencast: Simplify Views with Rails 2.0. It shows very clearly what I’m talking about.

Now some of this is good shit. The ability to write this:

<% div_for(product) do %>
<% end %>

And have it automatically generate div IDs and the like is great. That saves some nasty looking code – assuming it works properly, stays unique across multiple includes, etc. Good shit. A worthwhile addition.

But this:

# from
<%= render :partial => 'product', :collection =>  %>
# to
<%= render :partial =>  %>
# and for a single case
<%= render :partial =>  %>

I think this is bad news. Supposedly Rails is going to look inside that instance variable, decide whether it’s an object or an array, and automatically use the correct partial, multiple times if it’s an array .. why? To save what, 15 keystrokes?

I would argue the “Rails 1.2” way of doing this is about the most concise way to write this function imaginable while still maintaining decent at-a-glance understandability. It’s not so long and unwieldy, is it? And you can see what’s happening instantly – OK, now render a bunch of partials named ‘product’ using whatever was in this instance variable. The word “collection” is a nice touch which helps you remember there’s a number of them.

The second one? Its behaviour literally changes according to what’s in that instance variable. Is it really that much nicer to look at that it’s worth losing the information necessary to see, at a glance and unambiguously, what it’s doing?
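To be concrete about the objection: the new form amounts to a type switch buried inside render. A toy approximation (not the real ActionView implementation, which also derives the partial’s name from the object’s class):

```ruby
# Toy sketch of the Rails 2 behaviour: one call site, two behaviours,
# chosen by the runtime type of the instance variable.
def render_partialish(object)
  if object.is_a?(Array)
    object.map { |o| "<partial for #{o}>" }.join("\n")
  else
    "<partial for #{object}>"
  end
end

render_partialish("widget")
# => "<partial for widget>"
render_partialish(%w[widget gadget])
# => "<partial for widget>\n<partial for gadget>"
```

The branch is exactly the information the shorter template syntax hides from the reader.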

There’s more, and worse: forms, where we go from this

<% form_for :product, :url => product_path(@product), :html => {:method => 'put'} do |f| %>

to

<% form_for @product do |f| %>

And if you’ve used the strict Rails conventions, mapped all your resources in routes.rb, are using the strict controller layout as demonstrated in the resources scaffold generator and have a picture of DHH as your desktop background, it’ll know what to do and work.

Now, at this point you might be saying “but you don’t have to use these features – it’s only if you follow the convention, which is the whole point of Rails! Follow the convention, get free benefits – it’s been like that since day 1″!

I agree with that, kind of. But there is such a thing as too much convention, and too many limits, and – this one’s my point – too much “magic”. I know it’s a bit silly to talk about “vendor lock-in” in the context of a Rails project, but following “The Rails Way” is beginning to smell strangely similar. And these extraordinary efforts to save a few keystrokes, ONCE, at the expense of readability (and no doubt speed!) are beginning to seem less like cleverness and more like Koolaid-driven ideology.

We’ve been told, again and again, that it’s all about “Beautiful Code”. And you know, I appreciate that up to a point. But when the code is getting less readable and more dependent on arbitrary chains of “convention dependence” which may or may not work (and which may or may not change in 3.0!!) then I start getting cold feet.

You know what? I’m working on *a* web site, not *the* web site that these changes all seem designed for.

Predictions of a coming fork, anyone? “Rails Lite”?

Oh, and despite the endless hours which must have been poured into this kind of useless bloat, YAML handling in rails is still UTF8-unsafe.

UPDATE: Sure, you can just not use these features. But the codebase bloats, and execution gets slower – and it’s not like it was fast to begin with. I argue these new features make the code more difficult to read and learn – hurting the Rails community as a whole, reducing the number of developers, and marginalising the framework. As someone with a considerable investment in that framework, I am a stakeholder in Rails, and it’s from that position that I comment.