Why you should be using PostgreSQL with Rails

Rails has a problem. The standard way of dealing with scale in Rails is to scale horizontally: add more servers. This is commonly called "shared nothing", although it's really more "shared database" -- the usual pattern is that no state is kept in individual application servers.

The state doesn't disappear, though; it moves into the shared database. As soon as you have several application servers running in parallel against the same database, you are running concurrently. There are many subtle problems in dealing with concurrency, but it turns out that some very nasty bugs only show up with multiple servers: depending on the libraries you're using, your collections can get out of sync.

Xavier Shay has done an excellent job of documenting this, so I'll wait while you go read his posts on the subject.

When you have concurrency and need to maintain state correctly, you have some choices. You can use a central lock server. Or you can use the locking built into the database, in the form of transactions or optimistic locking. And, in fact, the "quick fixes" for acts_as_list and acts_as_state_machine involve wrapping their operations in a SERIALIZABLE transaction.
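For concreteness, here's a minimal sketch of what that quick fix looks like. The Item model is invented for illustration; on newer Rails you can pass :isolation => :serializable to transaction instead of issuing raw SQL:

    Item.transaction do
      # Must be the first statement inside the transaction on PostgreSQL.
      Item.connection.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
      item = Item.find(params[:id])
      item.move_to_bottom # the acts_as_list operation that races without this
    end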

So. Why don't Rails developers use transactions? More broadly, why the aversion to databases in general? Why do Rails developers tend to treat the database as a raw, dumb datastore and shun even simple database constraints?

Frankly, I think the reason is MySQL.

From what I can tell, MySQL got where it is mostly by having better documentation than PostgreSQL when it really mattered, by working on Windows with IIS, and by better marketing. It shipped as the default with Perl and PHP websites, and never left. For a long time, MySQL didn't support transactions, let alone subselects -- and since it was the first database most developers were introduced to, they didn't know databases could work any other way.

Now MySQL has transactions. Sort of. Most of the time, they work. But sometimes, they deadlock.

To a developer, it can seem like deadlocks happen for pretty much any reason at all.

  • A single insert into an InnoDB table can cause a deadlock, because InnoDB uses row-level locks internally.
  • Foreign key references use row-level locks internally.
  • You can also get silent deadlocks: when row-level locking on an InnoDB table meets a table-level lock on a MyISAM table, InnoDB's deadlock detection never fires, and all you see is lock wait timeouts.

The documentation mentions that you should be prepared to reissue a transaction if it fails.

"Always be prepared to re-issue a transaction if it fails due to deadlock. Deadlocks are not dangerous. Just try again."</blockquote> </p>

This is not something you can do when you've submitted a credit card authorization to the payment gateway.
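"Just try again" translates into something like the wrapper below (a sketch; with_deadlock_retry is a made-up helper). Note that it blindly re-runs the block, which is exactly what you can't do once the block has side effects:

    # Naive deadlock retry for ActiveRecord (illustrative only).
    def with_deadlock_retry(retries = 3)
      ActiveRecord::Base.transaction { yield }
    rescue ActiveRecord::StatementInvalid => e
      # MySQL reports "Deadlock found when trying to get lock; try restarting transaction"
      raise unless e.message =~ /Deadlock found/ && (retries -= 1) > 0
      retry
    end

    with_deadlock_retry { order.capture_payment! } # unsafe if this hits a payment gateway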

But MySQL doesn't stop there. It also locks the entire table when you add or remove a column or an index, because under the hood it creates a new table with the changed schema and copies all the data across. I have seen this take hours. It gets even worse when you consider that the quick fix -- creating auxiliary tables and joining between them -- only works up to the point where it gets you into serious trouble. Add MySQL's naive query optimizer on top of that, and you're screwed. (Let's not even talk about the bugs.)
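For a sense of scale: a migration as innocuous as this one (table and column invented for the example) rewrites and locks the whole users table on MySQL, while on PostgreSQL adding a nullable column is a near-instant catalog change:

    class AddNotesToUsers < ActiveRecord::Migration
      def self.up
        add_column :users, :notes, :text # MySQL: full table copy behind a lock
      end

      def self.down
        remove_column :users, :notes
      end
    end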

My theory is that Rails developers treat the database as a raw, dumb datastore with no data integrity because that is all MySQL is good for. Every time they tried to follow "best DBA practice" and do something useful with MySQL, they got bitten, and they came to hate touching (or even thinking about) databases. Over time, MySQL has trained developers to avoid the bits that don't work. It's now at the point where it's controversial to use any database features at all.

TL;DR

Use PostgreSQL. It's a much better choice if you're in any kind of startup. Not only does it have better transaction support, it also lets you roll back DDL operations, so you can do complicated schema upgrades atomically. If you're doing a lot of database migrations and refactorings, this is really useful. PostgreSQL does have some gotchas (notably, a raw COUNT(*) can be slow), but you can always use counter_cache.
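To see why transactional DDL matters, consider a multi-step migration (names invented for illustration). On PostgreSQL, if the backfill in the middle blows up, the whole migration rolls back and the schema is untouched; on MySQL you'd be left with a half-applied mess:

    class SplitUserName < ActiveRecord::Migration
      def self.up
        # PostgreSQL wraps all of this, DDL included, in one transaction.
        add_column :users, :first_name, :string
        add_column :users, :last_name,  :string
        execute "UPDATE users SET first_name = split_part(name, ' ', 1),
                                  last_name  = split_part(name, ' ', 2)"
        remove_column :users, :name # all or nothing: this never half-applies
      end
    end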

If you need to get hold of data really fast, use NoSQL, or plug memcached in via CacheMoney. Then start working your way into replication, either with the streaming replication built into PostgreSQL 9.0 or with Tungsten.

Wrap all your controller logic in transaction blocks, or move your ActiveRecord logic into a service / manager class and set up transaction management around that. Either way, transactional behavior should be the default; a sketch of what that looks like follows below. Use Foreigner to add foreign key references to your ActiveRecord migrations, and use simple constraints to enforce integrity. You should still use application-level validation, since it gives you better error messages and finer control, but things like validates_uniqueness_of are more trouble than they are worth.
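Here's one way that might look: a controller filter that makes every action transactional by default, plus a Foreigner migration that puts the integrity rules in the database (the tables and the unique index are invented for the example):

    class ApplicationController < ActionController::Base
      around_filter :wrap_in_transaction

      private

      def wrap_in_transaction
        ActiveRecord::Base.transaction { yield }
      end
    end

    class AddOrderConstraints < ActiveRecord::Migration
      def self.up
        add_foreign_key :orders, :users # Foreigner helper
        add_index :orders, :confirmation_code, :unique => true # instead of validates_uniqueness_of
      end
    end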

Finally, have Xavier Shay give a demonstration at your company. I've never met the man, but his website is right on the money.
