Posts Tagged ‘ruby’

Incredibly impressed with Ruby 1.9.2

Friday, December 4th, 2009

So I’ve got this web app. It’s pretty “heavy” and it does a lot of image manipulation. It’s a bit of a pig, to be honest.

Under Ruby 1.8.7 and 1.9.1, you can reliably expect the memory usage of this app to grow … and grow … and grow. I have seen it reach 900M before I shut it down (manually, on my development machine – in production, monit would have killed and restarted it long ago).

Memory usage under 1.9.2pre1? 72.7M, and it’s been running for hours. It got as high as ~130M at one point, but then – astonishingly – GC actually worked and released the unused memory. Will wonders never cease?

Performance is up around 20% over 1.9.1, too.

Memory usage, and the constant leaking/growth thereof, is my number one daily problem with Ruby. The preview release of 1.9.2 seems to have solved it. To say I’m happy about this development would be an understatement. I am looking forward to deploying on 1.9.2 ASAFP and will do so, probably, upon the release of preview 2 (I have noticed no stability problems whatsoever).

update: 12 hours later it’s gone down to 65.5M. Praise the Ruby Gods!

Maglev hype train derailed at last

Saturday, November 21st, 2009

Ah, Maglev. It seems that only yesterday I was ripped to shreds by a furious mob for daring to question your public claims of drastic speed gains over every other Ruby interpreter.

And so imagine my delight today, when the first alpha release is finally available! Sure, it’s only an alpha release, so it’s not going to be quite as good as the final product – but surely it will live up to the claims in the RailsConf 2008 demonstration? I mean, it was that good even then! Surely it must be even better now!

Anyway, I obviously had to install it and put these claims to the test.

I followed the instructions in the announcement post – with a couple of changes, such as using .profile instead of .bashrc.

I decided to test with as close to a real-world library as I could. That post mentions that Maglev supports Sinatra, so I thought that would make an ideal test. Then again, it also says it supports RubyGems, but I couldn’t get that to work at all. C extensions are out too, so we’ll be using WEBrick.

To get around the RubyGems problem, I manually downloaded Rack and Sinatra and placed the contents of their lib directories into a single folder. I then created a trivial Sinatra app as shown:

# load rack before sinatra, since sinatra depends on it
require File.expand_path(::File.dirname(__FILE__)) + '/rack.rb'
require File.expand_path(::File.dirname(__FILE__)) + '/sinatra.rb'
 
get '/' do
  "Maglev Rocks!"
end

I decided to run good old ab against it – keep-alive on, concurrency 10, for 10 seconds. My exact command was:

ab -kc 10 -t 10 http://127.0.0.1:4567/

Because I have other servers installed under Ruby 1.8.7 and 1.9.1, I forced Sinatra to serve with WEBrick as follows:

ruby1.9 testmaglev.rb -s webrick

Considering the past hype of order-of-magnitude speed hikes, I was ready to be blown away. Are you ready to be blown away? Oh yeah baby, 50 times faster comin’ right up!

RESULTS

Ruby 1.8.7: Requests per second: 131.38 [#/sec] (mean)
Ruby 1.9.1: Requests per second: 144.77 [#/sec] (mean)
Maglev: Requests per second: 97.78 [#/sec] (mean)

Well golly gosh. Look at that. It’s not faster after all – in fact it’s climbing the same steep performance hill every other interpreter has had to climb, Smalltalk magic notwithstanding. How about that.

I said the benchmarks shown at RailsConf were bullshit and would not even remotely reflect the real world performance of the final product. I said that until the interpreter implemented all of ruby, it was not ruby, and it was worthless to measure its performance. I poured skepticism upon the notion that a new interpreter, just by dint of some doubtfully superior Smalltalk heritage, could leapfrog all previous contenders with a two-order-of-magnitude performance boost.

Looks like I was right. But don’t take my word for it, run the tests yourself. I will be accepting contrite apologies in comments … ;)

script/memcached

Wednesday, November 18th, 2009

The inimitable Wincent recently referred to a memcached script he had written that toggles memcached on and off. I thought that was a great idea, so I wrote my own, and in the spirit of comparing dicks I thought I’d post it here.

#!/usr/bin/env ruby
 
# Pidfile lives in ../tmp relative to this script.
# (The constant's name was eaten by the blog's formatting; PIDFILE is assumed.)
PIDFILE = ::File.expand_path(::File.dirname(__FILE__)) + '/../tmp/memcached.pid'
 
def process_id
  File.exist?(PIDFILE) ? File.read(PIDFILE).to_i : false
end
 
def running?
  if process_id
    # signal 0 checks the process exists without actually signalling it
    Process.kill(0, process_id) == 1 rescue false
  else
    false
  end
end
 
def start!
  print 'starting memcached ...'
  system "memcached -d -P #{PIDFILE} -l 127.0.0.1"
  sleep 0.5
  print "started with pid #{File.read(PIDFILE)}"
end
 
def stop!
  print 'stopping memcached ...'
  Process.kill('INT', process_id)
  sleep 0.5
  File.delete(PIDFILE) if File.exist?(PIDFILE)
  print 'done.'
  puts
end
 
def ensure_running
  running? ? puts('already running.') : start!
end
 
def ensure_stopped
  !running? ? puts('not running.') : stop!
end
 
def toggle
  running? ? stop! : start!
end
 
case ARGV.first
when 'start'
  ensure_running
when 'stop'
  ensure_stopped
when 'toggle'
  toggle
when 'status'
  running? ? puts('running') : puts('not running')
else
  toggle
end

I love writing these kinds of scripts in Ruby. It’s perfect for it.

Cool Ruby one-liner

Tuesday, November 17th, 2009

Is the process 1234 running?

Process.kill(0, 1234) == 1 rescue false
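
Wrapped in a method – the name and the specific rescued errors here are my own additions – it might look like this:

def process_running?(pid)
  # signal 0 doesn't actually send a signal; it just checks whether
  # the pid can be signalled. kill returns the number of processes
  # signalled (1 here) or raises.
  Process.kill(0, pid) == 1
rescue Errno::ESRCH, Errno::EPERM # no such process / not our process
  false
end

process_running?(1234) # => true or false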

Image watermarking with CarrierWave

Friday, November 13th, 2009

I love the CarrierWave library for handling image uploads. But how can we do watermarks?

Easy. Prepare a transparent PNG with your logo, then add the following rule to your Uploader class:

  def watermark(path_to_file)
    manipulate! do |img|
      # read the watermark image, then composite it into the bottom-right corner
      logo = Magick::Image.read(path_to_file).first
      img = img.composite(logo, Magick::SouthEastGravity, Magick::OverCompositeOp)
    end
  end

Now call it like this:

  version :medium do
    process :resize_to_fill => [500,500]
    process :watermark => ["#{RAILS_ROOT}/public/images/logo.png"]
  end

Bingo, a beautiful watermark.

Disqus – a single point of failure

Tuesday, October 27th, 2009

I do not understand why bloggers adopt services such as Disqus.

It is not hard to put comments on a blog. Really, it is not hard at all.

It is not especially hard to block spam, either. This blog may not be popular, but it’s over 4 years old and gets a LOT of spam – sometimes thousands of attempts a day if some bot gets into heat. All of it is dealt with. It’s not effortless, but it’s pretty minimal.

So why do people wilfully pervert the core strength of the internet’s redundant peer-to-peer structure and adopt a centralised commenting system for their independent blog? I just don’t get it. Especially the so-called experts at Ruby Best Practise who apparently can’t even run a fricking comment system. I qualify for “Ruby Worst Practise” most of the time and I have written and run several.

Here’s “Ruby Best Practise” as I experienced it today:

Best practise right here folks

Thanks for losing my comment, matchless professionals at Ruby Best Practise.

A couple of Mongrel forks

Wednesday, September 23rd, 2009

Mongrel, still probably the most-used workhorse for serving Rails applications, has been abandoned for a long time – first by its creator, then its so-called maintainers. It’s been a farce, actually, that this important piece of infrastructure has been tossed around in this casual manner. I still rely on Mongrel in several ways, and I know others do too.

Anyway, I wanted a gem with the Ruby 1.9 patches in it, so I made one. It’s nothing but 1.1.5 plus the Ruby 1.9 patches (thanks to their author) – there will be no ongoing changes. All I care about is that I can do

sudo gem1.9 install sho-mongrel

and have it work.

update: fucking github has turned off gems, so this no longer works.

If you’re looking for more progress than that, phurley is actively improving Mongrel in his own fork (which I didn’t know about before I made mine). He also makes it available as a gem, and looks to be actively modernising the ancient battleaxe.

_why smashes Ruby Community Dummy Spit record

Thursday, August 20th, 2009

Ruby Community Dummy Spits

Fun with Growl

Wednesday, June 24th, 2009

(1..100).each {|num| Growl.new(:message => "I am number #{num}!").run}

Change the number to 1000, as I did, and don’t plan on seeing your desktop for a while. Add :sticky => true for extra “fun”.

Snow Leopard only has Ruby 1.8.7

Monday, June 22nd, 2009

A bit of a disappointment – I’d been hoping they’d go with 1.9.1.

Twitter was useful just then

Wednesday, June 10th, 2009

Wow, I finally found a use for Twitter. RubyForge was (is) down once again, and a quick search confirmed I was not the only person experiencing the problem, perhaps saving me a few minutes of tinkering.

I don’t mind this usage pattern; people “tweeting” about disasters, downtime or other anomalous events is useful. It’s the everyday crap and people using the service as a particularly crappy RSS feed that I don’t like.

Anyway, RubyForge is critical infrastructure in the Ruby world and shouldn’t be going down like this. Hope they make higher availability a priority. Imagine if you were doing an emergency server reinstall right now. Who knows how long you’d have to wait?

UPDATE: It’s been down over 7 hours now, a really serious failure. Thinking about creating my own mirror when it comes back up, wonder what the storage requirement would be?

Simple Couch comet listener with EM

Monday, May 25th, 2009

So CouchDB trunk has now got the long-awaited comet update-tracking functionality (the _changes feed), obsoleting pretty much every other way of doing update notification at a stroke. I’ve been looking forward to this for a while – I want to throw an EM daemon or two on the comet URL; they’ll listen for changes and do cache invalidations and search index additions asynchronously. Yes, I could just expire the cache synchronously upon save, but that gets very fiddly – I want to store the seq number in the cache so the expiration/update sequence is fully replayable. Doing that synchronously would involve another query to the DB to find the current seq, inviting race conditions – forget it. Also, I need to dispatch messages to diverse clients who might not be using the web server at all; I need all updates to flow through a router, and that can’t live in the web app.

Anyway, here’s a simple EM script which listens to the (charmingly undocumented) comet URL and does whatever you want with the updates. If you were doing anything complex, you’d probably want to send the processing off into an EM.defer operation.

require 'rubygems'
require 'eventmachine'
require 'socket'
require 'json'
 
module CouchListener
  def initialize sock
    # (the variable name was eaten by the blog's formatting; @sock is assumed)
    @sock = sock
  end
 
  def receive_data data
    data.each_line do |d|
      # only lines mentioning "seq" are update notifications; skip the rest
      next if !d.split("\"").include?("seq")
      puts "raw: #{d.inspect}"
      begin
        json_data = JSON.parse(d)
        puts "JSON: #{json_data.inspect}"
        puts "updated: id #{json_data["id"]}, seq #{json_data["seq"]}"
      rescue Exception => e # TODO definitely do not want to rescue in production
        puts "JSON parse failed with error #{e}"
      end
    end
  end
 
  def unbind
    EM.next_tick do
      # drain anything left on the socket once the connection drops
      data = @sock.read
    end
  end
end
 
CURRENT_SEQ = "0" # you'll want to replace this with whatever is current
DB_NAME = "test_comet"
 
EM.run{
  $sock = TCPSocket.new('localhost', 5984)
  $sock.write("GET /#{DB_NAME}/_changes?continuous=true&since=#{CURRENT_SEQ} HTTP/1.1\r\n\r\n")
  EM.attach $sock, CouchListener, $sock
}
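
To see the listener do something, save a document while it’s running. A quick Net::HTTP sketch (the database name matches the script above; the rest is my own illustration):

require 'net/http'
require 'json'

Net::HTTP.start('localhost', 5984) do |http|
  # create the DB; CouchDB returns an error response if it already exists, which is harmless here
  http.put('/test_comet', '')
  # save a document – the listener should print its id and seq
  http.post('/test_comet', {'hello' => 'world'}.to_json,
            {'Content-Type' => 'application/json'})
end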

Downtime

Sunday, May 24th, 2009

Seems the hosts of this server, Softlayer, experienced a power failure in the data centre in which I’m located, subjecting this machine to an unexpected reboot. Now, I should have tested this before – one of the traps of running an OS which basically never needs to be rebooted is that one rarely tests a reboot – but it seems I had failed to set httpd and, worse, named, to start at boot via chkconfig!

I don’t know how that slipped my mind. Everything else came back up just fine. I guess I just assumed, at the time, that httpd and named started by default – I don’t remember. Anyway, I really should have checked sometime in the year since this server was last rebooted. Ah well. Anyway, all good now.

In other news, I’m thinking about upgrading this server – it’s getting a bit long in the tooth now and for the same money I can probably get a beefier machine. That in itself might not be worth the hassle but I’m also considering migrating from RHEL to a debian system – perhaps Ubuntu Server, although I’ll try it out locally first. I have no complaints about RHEL5 – it’s an excellent distribution, highly reliable, and I haven’t had a single problem with it. But it seems debian-based linux is better supported in many new projects and I’m sick of feeling like I’m the only one trying to run $interesting_new_software on a RHEL system. Well, let’s see.

In other upcoming changes, I’m going to switch to Ruby 1.9.1 one of these days – just got to clean up one legacy app. I guess I can run 1.9 in parallel, but I’d like to do it all at once. To the future!

Jeweler for Gems, DaemonKit for daemons

Wednesday, May 20th, 2009

I hate writing and maintaining code which does extremely common things, so I’ve gone through a few rubygem management libraries over the years. Most recently I’ve been using a library called Bones – it was OK, I guess. Better than doing it myself.

Recently, however, I’ve come across Jeweler – a clean, simple solution which really fits my needs. It’s git-aware – any files git knows about, it knows about, and includes effortlessly. It automatically handles binaries, it does version updates, and it doesn’t require you to include numerous files of its own – just a short addition to an existing project’s Rakefile will do the trick. In short, it’s exactly what I want and nothing more, and I’ve adopted it for my gem needs.
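
For illustration, that Rakefile addition looks something like this (the gem name, summary and author are placeholders – check Jeweler’s README for the current API):

require 'jeweler'

Jeweler::Tasks.new do |gemspec|
  gemspec.name    = 'my_gem'                      # placeholder
  gemspec.summary = 'One-line summary of my_gem'  # placeholder
  gemspec.email   = 'me@example.com'              # placeholder
  gemspec.authors = ['Me']                        # placeholder
end

That gives you rake tasks for building and versioning the gem.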

Another fantastic RubyGem I’ve stumbled across lately is DaemonKit, which is exactly what the name suggests – a kit for easily generating daemons in Ruby. It’s a little more complex than Jeweler, obviously, but it generates a logical and familiar directory structure – if you’re proficient in Rails you’ll instantly grok what everything is for. Putting together a daemon in Ruby has always been relatively easy, but somewhat time-consuming as you build your own structure with all the little bits you need – I find DaemonKit’s defaults to be pretty much exactly what I want as a base, and you can be up and running in minutes. Plus, since half the daemons I write these days seem to have something to do with AMQP, its explicit catering to this use case is particularly welcome.

Combine the two and you’re in gem daemon heaven. Check them out.

Thin: Ruby 1.9.1 vs. Ruby 1.8.6

Friday, March 27th, 2009

Thin is the only standalone web server that’s compatible with both 1.8 and 1.9, so let’s test how they go head to head.

I did just the most cursory of tests, using a recent web site I’d been involved in, running Rails 2.3.2. Tests are with and without the DB turned on; turning it on slowed things down by a factor of 10 and tested this dog-slow laptop disk more than it tested Ruby anyway.

DB off:

1.9.1: 212.15 req/sec
1.8.6: 178.64 req/sec

DB on:

1.9.1: 23.53 req/sec
1.8.6: 19.49 req/sec

So for this particular (pretty simple) site, we get about a 10-15% speed increase, which is surprisingly little – it “feels” faster than that. I look forward to testing 1.9.1 further on more complex installations, and with Passenger. Locally, I’ve switched over to 1.9.1 for most development now, and will move over completely once Rails gets its fucking act together.

Handy abbreviation generator, bundled with ruby-core

Wednesday, March 25th, 2009

Anyone else think this is a pretty weird inclusion in the ruby core distribution?
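
For anyone who hasn’t met it: that would be the abbrev standard library, which generates every unambiguous abbreviation of a list of words.

require 'abbrev'

# every prefix of "ruby" maps unambiguously to "ruby"
Abbrev.abbrev(['ruby'])
# => {"rub"=>"ruby", "ru"=>"ruby", "r"=>"ruby", "ruby"=>"ruby"}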

Dead simple reload! for Ruby

Wednesday, December 31st, 2008

One of the things I miss most about Rails when working on gems is the reload! function, which rebuilds the environment to pick up anything that’s changed since the last save. Well, I wanted to recreate that functionality, hopefully in a really simple way.

The good news is, it’s actually pretty easy. I have two ways of doing it, pick the one you like best.

module AutoReload
 
  # filename => last seen mtime
  # (the variable name was mangled in the original post; @mtimes is assumed)
  @mtimes = {}
 
  def reload!
    diffs = AutoReload.differences # we can only call it once per reload, obviously
    if diffs.size > 0
      diffs.each {|f| Kernel.load(f)}
      puts "reloaded #{diffs.size} file(s): #{diffs.join(', ')}"
    else
      puts "nothing to reload"
    end
  end
 
  def self.update_modtimes
    # $" is the list of files loaded via require
    $".each do |f|
      @mtimes[f] = File.mtime(f) if File.exists?(f)
    end
  end
 
  def self.differences
    oldlist = @mtimes.clone
    AutoReload.update_modtimes
    newlist = @mtimes.clone
    # keep only the entries whose mtimes changed
    oldlist.delete_if {|key, value| newlist[key] == value }
    oldlist.keys.uniq
  end
 
end
 
include AutoReload

You will then need to initialise it somewhere after all your requires – AutoReload.update_modtimes will do the trick. If you can’t manage that, it will only work properly after the first time you use reload!.
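
A minimal session sketch (the required file is a placeholder for your own code):

require 'my_project'        # placeholder – your own requires go first
include AutoReload
AutoReload.update_modtimes  # prime the mtime list

# ... edit my_project.rb in your editor, then:
reload!                     # reloaded 1 file(s): my_project.rb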

This is the way to do it if you have a lot of files, I think, since it maintains a list of what was changed when, and then only reloads changed files.

Note that it’s not perfect. It will only be able to find files which are in the local path, i.e. it won’t be able to reload gems. However, that’s all I need for now.

The next way is even simpler, since it doesn’t bother to maintain a list – it just blindly reloads everything it can:

def reload!
  # no bookkeeping: just reload every required file that exists locally
  diffs = []
  $".each {|f| diffs << f if File.exists?(f)}
  if diffs.size > 0
    diffs.each {|f| Kernel.load(f)}
    puts "reloaded #{diffs.size} file(s): #{diffs.join(', ')}"
  else
    puts "nothing to reload"
  end
end

As you can see, this just blindly reloads everything it can find. Probably not the best idea if you have a lot of constants etc., but for a simple project it could be just the ticket. The good news is you don’t need to initialise it. If you have a lot of files, you should probably have it return "OK" or something, else you’ll have pages of reloads scrolling past.

Let’s bear in mind that this kind of trick is always a bit of a hack. Kernel.load() has no ability to unload anything, even if it no longer appears in the file. All it can do is overwrite. If you break your code by deleting something important, then reloading with this kind of trick won’t show it up – the object is still there until you restart ruby itself. It’s a convenience thing only, so don’t rely on it too much; do a full restart once in a while.

However, for my use case – making a lot of small changes and working in a very interactive manner with irb – this is a real time-saver, hope you find it useful too.

If you’d prefer it to happen automatically, rspec-style, there is a gem available which does basically the same thing, just every second instead of on demand.

Weird output from Digest::MD5 in ruby

Monday, November 24th, 2008

Any Ruby programmers who are reading this, I’m experiencing a strange issue regarding Digest::MD5. Let me show you:

>> require 'digest/md5'
=> true
>> Digest::MD5.digest "Les Rhythmes Digitales"
=> "\213U\3601\260%\267-\343(\213I\030\347"

What the fuck is that huge escaped thing? A unicode issue?

Check out the same from bash:

$ md5 -s "Les Rhythmes Digitales"
MD5 ("Les Rhythmes Digitales") = 8b55f031b0253c6ab72de3288b4918e7

Now that looks more like what I expect from an MD5 hash. Is Digest::MD5 mangling the text into some kind of weird invalid unicode?

Let’s try with $KCODE set:

>> $KCODE = "UTF8"
=> "UTF8"
>> require 'digest/md5'
=> true
>> Digest::MD5.digest "Les Rhythmes Digitales"
=> "\213U?1?%\267-?(?I\030\347"

Great. Any different in 1.9?

irb(main):003:0> Digest::MD5.digest "Les Rhythmes Digitales"
=> "\x8BU\xF01\xB0%\xB7-\xE3(\x8BI\x18\xE7"

Different again. At least I can see the characters in there, though. This is causing some pain.

Am I doing something hopelessly wrong? Somewhere in all this, some character-encoding crap is going down. I can’t believe I’m the only one having these problems, and they make it difficult to use hashed passwords. I am working around the issue by shelling out to bash for now, but would like to get it fixed.

UPDATE: About 2 minutes after writing that, I realised I need to use Digest::MD5.hexdigest, not plain digest. The difference, it turns out: digest returns the raw 16 bytes, while hexdigest encodes them as a hex string – so nothing was being mangled at all. Oh well, lesson learned. Apparently writing complaints on this blog helps me solve problems, so expect it to continue.

>> Digest::MD5.hexdigest "Les Rhythmes Digitales"
=> "8b55f031b0253c6ab72de3288b4918e7"

Milliseconds Since Epoch UTC

Friday, November 21st, 2008

That’s it! I have had enough. I have had enough of DateTime, time strings, datetime strings, Time.parse(), MySQL time, JSON time, CouchDB time, Ruby time, system time and all the rest of it.

I have come to realise that there is one, and only one, appropriate way to store time so everything can understand it without endless string conversion problems, and that is in a numeric format of milliseconds since Epoch UTC.

The only appropriate time to convert from milliseconds into a human-readable string is upon presentation to an actual human – ie, in the View. Or, if you like, store a second time field in any records you save or pass around – just make sure your program doesn’t care about those.

This revelation comes from YET ANOTHER journey into ISO document land as I realised that not only do I have no idea how to store milliseconds in a JSON date/time string, but neither does anyone else. RIGHT!! THAT IS IT! From now on, dates are an integer.

Maybe.

Anyway, here are some notes on getting millisecond-precision time references in and out of Ruby and JS, neither of whose time classes I am a fan of – more for me than anything else.

Ruby example:

 
# Getting milliseconds since Epoch out of Ruby:
 
time_float = Time.now.to_f
time_ms = (1000* time_float).to_i
 
#writing
 
>> t = Time.now.utc
=> Thu Nov 20 23:01:32 UTC 2008
>> time_float = t.to_f
=> 1227222092.50133
>> time_ms = (1000* time_float).to_i
=> 1227222092501
 
#reading
 
>> n = Time.at(time_ms / 1000.0).utc
=> Thu Nov 20 23:01:32 UTC 2008
>> t
=> Thu Nov 20 23:01:32 UTC 2008
 
# check what happened to sub-second precision through the round trip
 
>> n.usec # note we lost microsecond precision, this is intended
=> 501000
>> t.usec
=> 501333

Reducing Ruby Time.now to millisecond precision:

time_float = Time.now.utc.to_f
time_msp = ("%0.3f" % time_float).to_f

That doesn’t have much to do with time per se – I just thought it was cool. Note that apparently doing the precision reduction with strings is faster than doing it arithmetically.
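
I haven’t verified that properly, but it’s easy to check – a throwaway benchmark sketch:

require 'benchmark'

t = Time.now.utc.to_f
Benchmark.bm(12) do |b|
  b.report('string')     { 100_000.times { ("%0.3f" % t).to_f } }
  b.report('arithmetic') { 100_000.times { (t * 1000).round / 1000.0 } }
end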

JavaScript

// Reading an ms-since-epoch time:
js> date_from_above = new Date(1227222092501)
Fri Nov 21 2008 10:01:32 GMT+1100 (EST)
js> date_from_above.toUTCString()
Thu, 20 Nov 2008 23:01:32 GMT // note same as above 
 
// Make a new current date
js> t = new Date
Fri Nov 21 2008 10:28:12 GMT+1100 (EST)
 
// Output date object in milliseconds-since-epoch format
js> t.valueOf()
1227223692041

Review: Scaling Ruby by Envycasts / Gregg Pollack

Thursday, November 20th, 2008

So, there’s been some Internet Drama over the for-profit video release of a presentation from RubyConf 2008 by Gregg Pollack of Envycasts. Here are some reactions and comments collected from various sources over the last ~24 hours.

My own (highly negative) review:

Having watched the offending video yesterday, it is pretty easy to see why the guy can’t make money as a developer and so is trying to jump on the “screencast money train” a la Peepcode. I would question if it’s even worth downloading for free.

It’s nothing but a superficial high level tour of threads, messaging, and profiling, with some mildly interesting speed tips at the end which should really have been a single blog post. The “research” the guy did is evident all right – as in, it’s obvious he just looked it all up for the presentation and has never actually used any of this stuff. The “tips” he gives – when he gets around to giving any – are unremarkable at best and downright wrong at worst – he actually seems to recommend using RSS as an interprocess message queue, which is a really stupid idea.

He also includes a video overlay of himself giving the whole speech at the bottom of the screen, for that little “distracting touch of narcissism”. He pronounces “memoize” to rhyme with “turquoise”, and spends the first 3 minutes of a 40-minute paid presentation making an unfunny joke.

On the whole I think the guy might actually have done everyone a favour by making the video pay-only. The presentation is not even worth the 40 minutes it takes to watch, let alone $9, and the fewer people who think that RSS is a good way to implement distributed processing, the better.

None of this excuses the presenter’s actions re. RubyConf but in this case, I think it will be a self-correcting problem. There is some expectation that someone demanding money for their training videos might at least have some experience working with the subject on which they present. I expect this video to destroy the clown’s professional reputation just as surely as his money-grubbing actions have destroyed his personal credibility.

I am hardly alone in this assessment. Let me quote several other people from a number of sources, anonymous because I have used them without permission. That said, if you have a problem with being quoted anonymously, let me know and I will remove your comment immediately.

Couldn’t even be bothered watching it for free:

Ugh…

Couldn’t get through it. The music was just too annoying. So I skimmed. Probably spent 4 minutes watching it.

Yes, all pretty superficial stuff and nothing really useful in there.

I don’t really like those tutorials which are just a tour of add-ons/plug-ins/gems etc written by other people (and as you know, Ruby people are all too ready to embrace other people’s code). I’m more interested in seeing interesting, original and innovative code.

Another prominent developer is unimpressed:

there is nothing in this talk which cannot be discovered in a couple of minutes using google, or by reading a couple of howtos

An IP sleuth points the finger …

The “speed tips” at the end are stolen directly from igvita.com, without attribution of course, and the ruby threading graphs look suspiciously similar too. This video is basically nothing but a visual presentation of the content from someone else’s blog – unpaid, of course. To pay for it would be to encourage this kind of blatant theft.

Tsk, tsk.

UPDATE: More:

I had a strange feeling of deja vu when I saw this talk. I felt like I’d seen it before, somehow, and recently. Later that day I logged on and was trying to find out where I’d read it before. Turns out the whole thing was lifted from Igvita.com, with minor changes. No credit given at all. If I was the guy from igvita.com, I would be pissed.

Storage space conscious:

Not only is this video not worth 40 minutes of my time or $9 of my money, it’s not even worth 200M of my hard disk space. Deleted.

A message of support:

I fully support Mr Polack’s actions in this matter. Anyone dumb enough to 1. buy this POS and 2. implement its suggestions (RSS? Are you fucking KIDDING me?) deserves to have their money stolen and their app grind to a messy halt. Polack is doing us all a favour, why so harsh?

A compelling argument there.

UPDATE 2:


Surprise!