03/02/14

TBTTY: Freedom February

[This post was originally sent to the TBTTY list (4), but I thought others might enjoy it too so it's reposted here]

My favorite thing this year is February.

Last year, Bonanza started a tradition of swapping Seattle for somewhere tropical every February. We call it “Freedom February.” It was born from asking ourselves some pointed questions:

  • We’re an Internet company, we’re small-ish (1), why do we have to work in the same place every month?
  • How will it impact productivity if we give our team the freedom to work on their own schedule?
  • What was that bright, glowy thing that seemed to hang in the sky all day, making things warm and pleasant last Summer? (2) Can we hunt it down?

Curiosity got the better of us, so we booked some plane tickets, a month of lodging in Costa Rica, and we did it. It was more fun than I’d hoped for. There were monkeys.

[Inline image 1]

In response to our first question, we learned that there was really no reason we couldn’t get away with doing this every February. All we need is a connection to the Internet and this business can run itself fine. We were all friends already, but this trip reinforced our friendships and our combined sense of mission. To the second question, we got more code checked in that month than we did the month before or after, in part because of a collective spirit of working nights and weekends, even though we worked less during daylight hours. As a nice kicker, our sales also jumped about 15% that month. There was a lot to like about this idea.

This brings us to February 2014. Even before we departed, it had the makings of a TBTTY email when this victory dropped into our lap:

[Inline image 2]

In their annual survey of more than 12,000 sellers (the biggest/only third-party survey on marketplaces), Bonanza beat out Etsy, eBay and Amazon in whether sellers would recommend it to a friend! We had scored in the top four of this survey during previous years, but this was the first year we won both “Most Recommended” and “Top Score Overall.” Etsy, eBay and Amazon finished 2nd, 3rd and 4th to us in cumulative score. February was off to a fine start.

And then, the trip. This year, since Bonanza has a new tyke in tow (our developer’s criminally adorable daughter, Kira), we decided to visit the north shore of Oahu, where we’d have easy access to modern health care if anything important came up. Here’s Kira enjoying Freedom February 2014:

[Inline image 3]

In terms of Februarys, this year was even better than last year. Bonanza has grown since 2013, so we had more people = more opportunities for epic BBQs = more fun. The availability of our own hot tub this year was a nice touch too.

[Inline image 4]

Enjoying BBQ dinner

[Inline image 5]

A good time was had by all this year. But my favorite part of the trip was seeing how happy my parents were to be in Hawaii. Until I started dragging them around the world with me a couple years ago, they hadn’t been on a vacation since their honeymoon (25+ years ago). My mother in particular has more energy and enthusiasm than any of my 30-something friends, and her happy-go-lucky spirit becomes infectious in a group setting. Here she is with my pa in front of our house:

[Inline image 6]

Even when I was sitting upstairs with ocean surf drowning out most sound, I could still hear my mom’s laughs echoing above everyone else’s yapping downstairs. It made me realize that I probably have her to thank for much of my willingness to fail (in jokes, in crazy ideas, in business). When you grow up with someone who thinks most everything you say is hiiii-larious, I think a bit of irrational confidence sticks to you, failures be damned.

If any of you run a small-ish company and would like to give this idea a try, DM me and I’ll share what I’ve learned from my two years of experience. Generally speaking, it’s less work than I had expected (3). I think “inertia” was the biggest reason it took us a couple years to try it out, but now I think that this is exactly the sort of benefit a startup deserves to compensate for all the hard work we put in throughout the year.

(1) About 15 people, many of whom are remote, some part-time. So far the participants have just been folks working out of our Seattle office, but I hope to keep slowly growing the trip as long as we can employ hard workers who GSD with minimal oversight.

(2) I’m not telling. You’re just going to have to try this yourself or wait a few months and hope.

(3) The hardest part is finding lodging that’s big enough, affordable, and conducive to work (i.e., has a view of the ocean). Besides that, everything else tends to fall into place, especially if you get your team involved in the planning process.

(4) It still hasn’t shown up on the list since I posted it there yesterday, so I’m not sure exactly how “alive” the list still is

11/26/13

Rails 3.2 Performance: Another Step Slower

Having a large codebase means that we don’t upgrade our version of Rails very often (we’re averaging once every two years, with about 1-2 weeks of dev time per upgrade). Every time we do upgrade, though, one of the first things that I’m curious to inspect is the performance delta between versions.

For our previous upgrade, I documented our average action becoming about 2x slower when we moved from Rails 2.3 to Rails 3.0, with an action that had averaged 225ms climbing to 480ms. Luckily, in that episode we were able to pull out some tricks (GC tuning) such that we eventually got the same action down to 280ms. Still around 25% slower than Rails 2.3, even implementing fancy new tricks, but we could live with it.

When we finally decided we had to move from Rails 3.0 to 3.2 to remain compatible with newer gems, I was understandably anxious about what the performance drop was going to be based on our past experience. With the numbers now in hand, it looks like that apprehension was warranted. Here is the same action I profiled last time (our most common action – the one that displays an item), on Rails 3.0 before upgrade:

Most common action before upgrade, averaging 301 ms over a 3-hour time window

And here it is now:

After upgrade, same time period as last week, averaging 423 ms over a 3-hour time window

The problem with 3.2 is that, unlike last time, we don’t have any more tricks to pull out of our hat. We’ve already upgraded to the latest and greatest Ruby 2.0. We’ve already disabled GC during requests (thanks Passenger!). When we made these upgrades, they sped up our Rails 3.0 app around 25%. That performance improvement has now been overshadowed by the 40% slower controller and view rendering we endure in Rails 3.2, making us slower than we were in 3.0 before our Ruby optimizations.
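Outside of Passenger, the underlying idea of “no GC during requests” is simple enough to sketch in plain Ruby. Note that `handle_request` below is a throwaway stand-in for real request work, not anything from our codebase:

```ruby
# Sketch of the "no GC during requests" trade: keep the collector out of
# the latency-sensitive path, then collect between requests.
def handle_request
  1_000.times.map { |i| "row-#{i}" } # stand-in for real request work
end

GC.disable                 # nothing gets collected while the request runs
response = begin
  handle_request
ensure
  GC.enable                # re-enable between requests...
  GC.start                 # ...and pay the collection cost now, off the critical path
end

puts response.size # => 1000
```

Passenger’s out-of-band GC hooks amount to the same trade, minus the hand-rolling.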

Suffice it to say, if you have a big app on Rails, you have probably learned at this point to fear new versions of Rails. I fully empathize with those who are forking over bucks for Rails LTS. If we didn’t need compatibility with new gems, staying on 2.3 would have left us about 100% faster than Rails 3.0, which in turn is about 40% faster than Rails 3.2.

New Rails releases trumpet improvements like “ability to build single-page web apps” and “tighter security defaults” and “streamlining, simplifying” the constituent libraries. The closest thing we’ve seen to a performance improvement lately was that 3.2 made loading in development faster (1). This was certainly a fabulous improvement (took our average dev page load from 5+ seconds to 1-2), albeit one we already had in Rails 3.0 thanks to active_reload.

My sense is that performance has become the least of the concerns driving Rails development these days, which, if true, is a shame. If Rails put as much time into analyzing and improving performance as it does into “streamlining, simplifying,” it’s hard to believe that we would keep swallowing 40%-100% performance setbacks with each release. Maybe a partnership with New Relic could help the Rails team see the real-world impact of their decisions on the actual apps being built with their platform? If others’ experience is similar to ours, that would be a lot of pain felt by a lot of people.

I admit I’m a bit reluctant to make this post, because Rails has given so much to us as a platform, and our business is too small at this point to be directly involved in improving performance within Rails. We will, however, continue to post any salient optimizations that we discover to this blog and elsewhere.

My primary concern though, and the reason I am posting this, is that if Rails keeps slowing down at the rate it has, it makes me wonder if there will be a “point of no return” in the 4.x or 5.x series where it simply becomes too slow for us to be able to upgrade anymore. Each new release we’ve followed has been another step toward that possibility, even as we buy ever-faster servers and implement ever-more elaborate optimizations to the interpreter.

Has anyone else out there upgraded a medium-to-large webapp from Rails 2 -> 3 -> 4? I’d be very curious to hear your experience. The lack of results when Googling for “Rails performance” has always left me wanting more details on other developers’ upgrade experiences.

(1) New caching models may improve performance as well in some scenarios, as could the dynamic streaming when used with compatible web servers. For the purposes of this post I’m focusing on “performance” as it pertains to dynamic web apps that run on a server, which means stuff like interpreting requests, interacting with the database, and rendering responses.

11/1/13

Copy Ubuntu (Xubuntu) to new partition with minimum pain

Every time I upgrade computers or hard drives I have to re-figure out how to get my Ubuntu and Xubuntu OS reinstalled in the minimum number of steps. The following is the sequence I used to get the job done here in 2013 on Raring Ringtail.

  1. Create a Live Ubuntu Startup Disc. This has consistently been more painful than I think it ought to be. Last time I tried this, usb-creator-gtk wouldn’t recognize my USB drive. This time, usb-creator-gtk would crash with various segfaults before it completed copying my iso to my flash drive. Eventually I discovered UNetbootin and all was well. It can grab the OS installs for you, or you can give it the path to an ISO you want to burn. The only trick with it for me (perhaps because of my balky flash drive) was that I had to plug and unplug the flash drive from my computer a few times before UNetbootin (or Xubuntu) would recognize it. As far as the OS goes, I put the most recent Ubuntu LTS on my flash drive since I figured it would have the best toolset for modifying partitions.
  2. Boot from the Live Startup Disc. In the menu, pick “Try Ubuntu.” Click the Ubuntu icon (upper left) and search for “Gparted.” Fire it up. If you’re lucky, your new drive is larger than the partition you want to copy. If not, you’ll have to resize the partition you’re copying such that it can fit onto the new disk.
  3. Copy the partition. Since you live booted, neither partition should be mounted, so you should be able to click “Copy” on the old partition and “Paste” it on the new drive (if it has an existing partition, you’ll need to delete that first). Apply the change. Wait an hour.
  4. Run some esoteric crap to change the UUID of the new partition. Start a terminal, then:
    sudo blkid # shows the list of all your drives, observe that the new and old drive have same UUID, that won't do!
    sudo tune2fs -U random /dev/sdXX # XX is the letter+number of the new partition you copied to. This will assign it a new UUID
  5. Open the file manager. The top left choices should be your various mountable drives. You should recognize the UUID for one of them as the UUID you randomly created in the last step. Click on that drive to mount it (the UUID of the drive may not show until you click it, that’s fine. Just keep clicking until you find the drive with the new UUID).
  6. Run some more esoteric crap to update your grub and fstab config. Grub is the bootloader. It lives in /boot/grub/grub.cfg on the drive you just mounted (should be /media/UUID/boot/grub/grub.cfg). Do a find-and-replace of all the old UUIDs with your new UUID. Also change the line “menuentry Ubuntu” to “menuentry UBUNTU” so you can be sure that you’re booting into the right grub after step 7. Save that file. Then open /media/UUID/etc/fstab and update the UUID there as well. More detailed (longer-winded) version of these instructions can be found in step 5 here.
  7. Ensure drive is bootable. Still in the Live Ubuntu Trial, go to System -> Administration -> Disk Utility. Pick your new disk and unmount it. Then click “edit partition” and choose the “Bootable” checkbox.
  8. (Optional) Update your MBR. If your new partition is on a new drive, you can just set up your BIOS to try to boot off the new drive first and you should be GTG. Otherwise, you can follow the instructions in Step 6 of the aforementioned link to update your MBR. If you don’t see the upper-cased UBUNTU after you point to the new drive, that’s probably a sign you need to update the MBR.

After that, reboot and you should be GTG. Seems like a lot of steps for something that ought to be simple, but the tricky bit is to get your Grub/bootup stuff able to disambiguate between two drives that look identical on a byte-for-byte basis.

05/11/13

Ruby slice to end of an array

It’s popular enough to be a Google-suggested search, but not popular enough to have a good result yet.

If you want to slice to the end of a Ruby array, and/or get the end of a Ruby array, what you want is

arr[1..-1] # a range ending in -1 means all the rest of the array

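A couple of quick, runnable examples of the idiom (the array here is just illustrative):

```ruby
arr = [10, 20, 30, 40]

arr[1..-1]   # => [20, 30, 40]  slice from index 1 through the end
arr.drop(1)  # => [20, 30, 40]  equivalent, and arguably reads better
arr.last(2)  # => [30, 40]      just the last n elements
```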
04/28/13

Install Balsamiq (and Air) on Ubuntu/Linux Pangolin 64 bit

In my continued effort to wean myself off VMware (slow start speed, many gigs to copy every time I get a new computer) I decided to invest some time this afternoon toward getting one of my essential tools, Balsamiq Mockups, to work on 64-bit Precise Pangolin. I had assumed this would be impossible, so was stunned to find that it’s not only possible, but pretty darned easy. Here are the steps that I cobbled together from a few sources to get it working fast:

Install Adobe Air

wget http://airdownload.adobe.com/air/lin/download/latest/AdobeAIRInstaller.bin
chmod +x AdobeAIRInstaller.bin
sudo ln -s /usr/lib/x86_64-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0
sudo ln -s /usr/lib/x86_64-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
sudo ./AdobeAIRInstaller.bin

The middle bit makes some symlinks so that Adobe Air can access gnome keyring (required for install). If you’re running 32-bit Pangolin, those two steps are slightly different:

sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0
sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0

Install Balsamiq

wget http://builds.balsamiq.com/b/mockups-desktop/MockupsForDesktop64bit.deb
sudo dpkg -i MockupsForDesktop64bit.deb

Or if you’re running 32-bit:

wget http://builds.balsamiq.com/b/mockups-desktop/MockupsForDesktop32bit.deb
sudo dpkg -i MockupsForDesktop32bit.deb

Yay For Easy

Thanks to Linux.org for the guidance on installing Adobe Air. The steps on Adobe’s own site are an awful mess.

02/21/13

Simplest AJAX upload with Rails Carrierwave and jQuery

The time has finally come for a follow-up to my post from a couple years ago on using jQuery, attachment_fu, and Rails 2.3 to upload an asset to my blog. I wanted to share the updated version of my attempt to determine the absolute minimal code necessary to implement AJAX uploads on Rails 3 with Carrierwave.  As was the case a few years ago, the Google results still tend to suck when searching for a simple means to accomplish an AJAX upload with Rails — the most popular result I could find this evening was a Stackoverflow post that detailed 9 (ick) steps, including adding a gem to the project and creating new middleware.  No thanks!

The Javascript from my previous example is essentially unchanged.  It uses jQuery and the jQuery-form plugin. The main challenge in getting AJAX uploading working is that form_for :remote doesn’t understand multipart form submission, so it’s not going to send the file data Rails seeks along with the AJAX request. That’s where the jQuery form plugin comes into play. Following is the Rails code that goes in your html.erb. Remember that in my case I am creating an image that will be associated with a model “BlogPost” that resides in the BlogPostsController. Adapt for your models/controllers accordingly:

<%= form_for(:image_form, :url => {:controller => :blog_posts, :action => :create_asset}, :remote => true, :html => {:method => :post, :id => 'upload_form', :multipart => true}) do |f| %>
 Upload a file: <%= f.file_field :uploaded_data %>
<% end %>

Here’s the associated Javascript:

$('#upload_form input').change(function(){
 $(this).closest('form').ajaxSubmit({
  beforeSubmit: function(a,f,o) {
   o.dataType = 'json';
  },
  complete: function(XMLHttpRequest, textStatus) {
   // XMLHttpRequest.responseText will contain the URL of the uploaded image.
   // Put it in an image element you create, or do with it what you will.
   // For example, if you have an image element with id "my_image", then
   //  $('#my_image').attr('src', XMLHttpRequest.responseText);
   // Will set that image tag to display the uploaded image.
  }
 });
});

Now, chances are you’re uploading this asset from a #new action, which means that the resource (here, the BlogPost) that will be associated with the image has yet to be created. That means we’re going to need a model that we can stick the AJAX-created image in until such time that the main resource has been created. We can do this if we create a migration for a new BlogImage model like so:

class CreateBlogImages < ActiveRecord::Migration
  def self.up
    create_table :blog_images do |t|
      t.string :image
    end
    add_column :blog_posts, :blog_image_id, :integer # once created, we'll want to reference the BlogImage we created beforehand via AJAX
  end
end

The corresponding BlogImage model would then be:

class BlogImage < ActiveRecord::Base
  mount_uploader :image, BlogImageUploader
end

Of course, if your resource already exists at the time the AJAX upload will happen, then you’re on easy street. In that case, you don’t have to create a separate model like BlogImage, you can just add a column to your main resource (BlogPost) and mount the uploader directly to BlogPost. In either case, the BlogImageUploader class would be setup with whatever options you want, per the Carrierwave documentation.
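For the curious, here’s a bare-bones sketch of what such an uploader class might look like. The storage backend and directory below are illustrative guesses, not prescriptions; consult the Carrierwave docs for the full menu of options:

```ruby
# app/uploaders/blog_image_uploader.rb -- requires the carrierwave gem
class BlogImageUploader < CarrierWave::Uploader::Base
  storage :file # or :fog if you keep assets on S3 and friends

  # Where uploaded files land on disk, namespaced per record
  def store_dir
    "uploads/blog_images/#{model.id}"
  end

  # Optional: a thumbnail version (requires an image processor like MiniMagick)
  # version :thumb do
  #   process :resize_to_fit => [200, 200]
  # end
end
```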

Continuing under the assumption that you will separate your model from the main resource (in this case, the BlogImage, which is separate from the BlogPost), we can create this image before the BlogPost exists, and stash the BlogPost id however you see fit. Thus, your controller’s #create_asset method will look like:

def create_asset
  blog_image = BlogImage.new
  blog_image.image = params[:image_form][:uploaded_data]
  blog_image.save!
 
  # TODO: store blog_image.id in session OR pass ID back to form for storage in a hidden field
  # OR if your main resource already exists, mount the uploader to it directly and go sip on a 
  # pina colada instead of worrying about this
 
  render :text => blog_image.image.url
end

And that’s it. No new gems, plus low-fat Javascript and controller additions.

Bonus section: How to embed this AJAX upload form in the form for its parent resource

One of the more common questions from my last post was how to display this AJAX image upload amongst the form for another resource. There are many ways to accomplish this (see comments from last post if you’ve got time to kill), but in keeping with the spirit of simplicity in this post, one fast hack I’ve used:

  1. After all the form fields for the main resource, close the form without a submit button
  2. Insert the AJAX form
  3. Add a link below the AJAX form that is styled to look like a button. Have this link call Javascript to submit your main form

Not going to win any beauty contests, but easy to setup and gets the job done.

11/27/12

One approach to fixing Mysql Alter Table hang

Many of the Google results you’ll get for searching along the lines of “mysql alter table hang” or “mysql alter table frozen” or “mysql alter table stuck” will correctly point out that it often takes a long time for an alter table to finish.  They will also point out that killing an alter table is not an instant operation, it takes time for the kill to delete the temporary table.  But none of these morsels of advice covers the situation we recently found ourselves in, so I will share some details.

We had a write-intensive database table with small rows and a moderately big overall size (~700MB).  After waiting about 30 minutes for the alter table to complete, I finally got fed up and attempted to kill the alter table’s thread.  The thread did indeed change from running the query to a “killed” state, but that killed state was still there 30..60..90 minutes later (all the while claiming that it was “renaming results table”).  Moreover, about 1800 threads that had become stuck waiting on the alter were sitting around on our Mysql process list.

I tried various remedies, from looking for deadlocks in “SHOW INNODB STATUS\G” (no deadlocks, just 1800 patiently waiting threads), to looking for the temp table that had supposedly been getting built to see if it was progressing (it wasn’t).  About three hours into the event, I decided we might as well kill the 1800 stuck processes in case it would take a DB restart to finally clear this alter table.  I followed these instructions on how to kill Mysql threads en masse, and got us down to just the alter thread along with a couple waiting threads that hadn’t been in my list of threads to kill.

Then, about two minutes later, something mysterious happened.  The alter command finished.

All of the Googling and Mysql info delving I could manage would not reveal what it was that had left the ALTER command stuck so much longer than it typically takes for a table of its size.  But whatever was keeping the alter command from finishing seems clearly to have been related to the 1800 threads that were sitting around waiting on the alter to complete.  Once we purged these, the alter finished within a couple minutes.

Hopefully this is a helpful alternate idea to try if you should find yourself with an ALTER TABLE that refuses to die no matter what steps you take.

11/27/12

What happens if two Google Adwords auto targets have the same bid?

The question in this post’s title came up recently as we continue to optimize our Google Shopping Adwords bidding tool, and I wanted to share my learnings with future web searchers.  Our situation is that we use Google Shopping PLAs to drive traffic to Bonanza, and our bid amounts are specified via Adwords labels that we apply to each item.  Most items will have multiple Adwords labels, where each Adwords label corresponds to an Adwords auto target.  My question to Google was:  when there is one product that has multiple Adwords labels, and those labels correspond to auto targets that have the same bid, which auto target gets credit for the impression (and subsequent click/conversion)?

The answer, straight from Google, makes a lot of sense:

So the one that enters the auction will be attributed with the clicks and impressions and this is dependent on the performance history associated with that auto target. The one with the stronger performance history – clicks and CTR attributed to it, will enter the auction and hence get the impressions.

Additionally if they are from different ad groups – the past ad group performance history and ad group level CTR would also matter.

Thus the answer:  whichever auto target performs best has the best chance of being shown in Google’s Adwords “auction” (the name they give to the process of choosing which Adword or GS products to show).

10/22/12

Early Signs of a Coder

What is the single question that is most predictive of dev ability?  The best coders come from a very disparate set of backgrounds, so it’s hard to group them on a single criterion.

However, one characteristic that I have seen be consistently predictive:  starting at a young age.  Gates, Woz, and even Jordan did it.

I am not in position to say whether I make for a good, great, or gruesome coder (1).  But I know that my history with computers is epic:

  • 1993: Age 13. Get our first computer, a Packard Bell 486SX/33 with 4MB of RAM.  It came with a San Diego Zoo App, an encyclopedia, and a really lame racing app of some sort (not to mention all the Win 3.1 niceties).  (2)
  • 1993: Learn how to use DOS, configure games to use extended memory, figure out how windows system config files work
  • 1993: Discover QBasic.  Disappointed to learn that it doesn’t compile to exes.  Decide to write a choose-your-own-adventure game with it anyway, complete with ASCII graphics.
  • 1994: Start a BBS (search for “Bill Harding” in the list), get the most calls in Poulsbo/Bremerton/Silverdale area (not saying much)
  • 1994: Enroll in the first of two computer programming courses at local community college (4.0).
  • 1995: Wrote VBBS scripts to emulate the five or so most popular BBS flavors (PCBoard, OBV/2, Renegade, etc).  To my knowledge, no one else accomplished this.  It was like carving a pumpkin with a butter knife.
  • 1995: Finish the community college computer programming courses with 3.9 average
  • 1996: Discover I prefer girls to computers

From 1996 until late-college were the dark ages of Bill programming, wherein I spent the better part of my teens having forgotten about that whole computer thing. (3)

But the reasons coding was so magnetic to me are the reasons it still is: a chance to dream up whatever I could think of, and then make it real to feel the rush of creation.

It’s like what businesses are supposed to be: you decide what’s important to you, and then go make that vision into reality.  But in business, constant compromise and pragmatism ultimately rule the day, unless you’re Steve Jobs.  A competent coder, by contrast, can at any time create the best program (of a certain type) in all human history.  It could be used by thousands or millions of people.  If it’s important enough to spend years on.

Even in the more pragmatic here and now, I know of few other jobs with a comparable opportunity to build something that matters with one’s own two hands.  This is why I latched onto programming so tightly & immediately at age 13.

Most of these characteristics are present by early teens:  the desire to build things, the capacity to solve problems, the joy of seeing a project completed… I think of them as “born qualities;” thus, my hunch is that if the teenager has access to a computer, they’ll probably figure out pretty quickly if they’re a programmer or not.

Update:  interviewed about 50 UW CS students today. The earliest coder of the bunch started in his late teens. The vast majority started when they entered the program (??!!).  So perhaps this theory is bunk, or these kids all suck.

(1) I suppose that having built Bonz is strong circumstantial evidence that I’m not gruesome

(2) Note how I remember the software that came with my first computer in as much detail as my first kiss

(3) Not entirely true; I did still fix old people’s computers at exorbitant prices (consistent with what other computer fixers charged). But mostly the cute girls beat the computers in this round.

10/17/12

Rails Exception Handling and Notification with Errbit

Bonanza has travelled a long road when it comes to trying out exception handling solutions. In the dark & early days we went simple with the Exception Notification plugin. The drawbacks of it were many, starting with the spam that it would spew forth when our site went into an error state and we’d end up with thousands of emails. There was also no tracking of exceptions, which made it very difficult to get a sense for which exception was happening how often.

Eventually we moved to HopToad (now Airbrake). It was better, but lacked key functionality like being able to close exceptions en masse or leave comments on an exception.

From there we moved to Exceptional, which we ended up using for the past year.  It was alright, when it worked.  The problem was, for us, it frequently didn’t work.  Most recently, we spent the last week having received two exceptions reported by Exceptional, when New Relic clearly showed that hundreds of exceptions had happened over that time period.  Also damning was the presentation of backtraces, which were hard to read (when present), as well as an error index page that made it difficult to discern what the errors were until they were clicked through.

Enter Errbit.  Jordan found this yesterday as we evaluated what to do about the lack of exceptions we were receiving from Exceptional.  Within a couple hours, he had gotten Errbit set up for us, and suddenly we were treated to hundreds of new exceptions that Exceptional had silently swallowed from our app over the past year.

But it’s not just that Errbit does what it is supposed to — it’s the bells and whistles it does it with.

Specifically, a handful of the features that make Errbit such a great solution:

  • Can set it up to email at intervals (e.g., 1st exception, 100th exception) so you hear about an exception when it first happens, and get reminded about it again later if it continues to be a repeat offender
  • Allows exceptions to be merged (or batch merged) when similar
  • Allows comments by developers on exceptions, and shows those comments from the main index page so you can quickly see if an exception is being worked on without needing to click through to it
  • Easy to read backtrace, plus automagic tie-in to Github, where you can actually click on the backtrace and see the offending code from within Github (holy jeez!)
  • Liberal use of spacing and HTML/CSS to make it much easier to read session, backtrace, etc relative to Exceptional and other solutions we’ve used
  • Open source, so you can add whatever functionality you desire rather than waiting for a third party to get around to it (a fact we’ve already made use of repeatedly in our first two days)
  • Open source, so the price is right (free)

Simply put, if you’re running a medium-to-large Rails app and you’re not using Errbit, you’re probably using the wrong solution.  Detailed installation instructions exist on the project’s Github home.
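For reference, wiring a Rails app up to an Errbit server uses the standard airbrake notifier gem with Errbit’s host swapped in — roughly the initializer below. The host and key are placeholders; Errbit’s README has the authoritative version:

```ruby
# config/initializers/errbit.rb -- requires the `airbrake` gem in your Gemfile
Airbrake.configure do |config|
  config.api_key = 'YOUR_ERRBIT_APP_API_KEY' # from the app's page in Errbit
  config.host    = 'errbit.example.com'      # wherever you deployed Errbit
  config.port    = 443
  config.secure  = config.port == 443
end
```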