Adventures in Scaling, Part 2: PostgreSQL

Several months ago, I wrote a post about REE Garbage Collection Tuning with the intent of kicking off a series dedicated to different approaches and methods applied at Miso in order to scale our service. This time around I want to focus on how to set up PostgreSQL on a dedicated server instance. In addition, I will cover how to tweak the configuration settings as a first pass toward optimizing database performance and explain which parameters are the most important.

Why PostgreSQL?

Before I begin covering these topics, I want to briefly touch on why our application (and this tutorial) is centered on PostgreSQL and not one of the many other RDBMS or NoSQL alternatives available for persistence in current web development. From the multitude of “general purpose” database persistence options available, the most common choices for a startup in my experience tend to be MySQL, Drizzle, PostgreSQL and MongoDB.

Each of the options above has pros and cons; as is usually the case in technology, there is no one-size-fits-all solution. In the past, I have traditionally used MySQL for the bulk of database persistence in Rails apps. This choice was largely because of familiarity, as well as the clear MySQL favoritism in the early years of the Rails community. Though the full explanation of why is outside the scope of this post, suffice it to say I won’t be choosing MySQL again when starting a new project. My claim, unsubstantiated in this post, is that there is nothing significant MySQL provides over PostgreSQL, and yet there are many pitfalls and downsides.

If you are interested in the subject, I recommend you read a few posts and draw your own conclusions. To be fair, Drizzle looks like an interesting alternative to MySQL and/or Postgres. Having never used that database, I would be curious to hear how it compares to PostgreSQL. We are big fans of MongoDB at Miso and we store several types of data for our services within collections. However, for historical and practical reasons, we did not want to dedicate the time to convert our primary dataset as the benefit at our current level was not significant enough to warrant the time involved. In a future post, I would love to delve deeper into our Polyglot Persistence strategy and why we opted to use particular technologies over alternatives.

Setting up PostgreSQL

With that explanation out of the way, let’s turn our attention to setting up PostgreSQL on a dedicated database server. In this tutorial, we will be installing PostgreSQL 9.x on an Ubuntu machine. You may need to tweak the steps depending on your platform and setup.

One of the first questions you run across when setting up a dedicated PostgreSQL server is “How much RAM should the instance have?”. I would take a look at the total size of your database (or the expected size of your database in the near future). If at all possible, the ideal RAM for your instance would allow for the entire database to be placed in memory. At small scales, this should be possible and will mean major performance increases for obvious reasons. The size I recommend for a starting server is typically between 2-8GB of RAM. Furthermore, I would recommend against using a dedicated database with less than 1GB of RAM if you can help it. If you have a large dataset and need replication or sharding from the start, then I would recommend putting down this guide and buying the PostgreSQL High Performance book right now. For the purposes of the rest of this guide, I am going to assume an 8GB instance was selected.
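If you already have a database running somewhere, one way to ground this sizing decision is to check its actual on-disk footprint from psql. The following query is a sketch using the built-in pg_database_size function:

```sql
-- On-disk size of every database in the cluster, largest first:
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
  FROM pg_database
 ORDER BY pg_database_size(datname) DESC;
```

If the total comfortably fits in your candidate instance's RAM, you are in the easy regime described above.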

Now that we have picked the size of our instance, let’s actually start the install. Unfortunately for us, the Ubuntu 10.04 apt repositories only have 8.4; the postgresql-9.0 package won’t be added until Ubuntu Natty. For this reason, unless you are on that version, you must use an alternate repository for this installation. The “python-software-properties” package provides the add-apt-repository utility, which makes adding repositories easier. If you haven’t yet, install it first:

sudo apt-get install python-software-properties

Next, let’s add the repository containing PostgreSQL 9.0:

sudo add-apt-repository ppa:pitti/postgresql
sudo apt-get update

Now, we need to install the database, the contrib tools, and several supporting libraries:

sudo apt-get install postgresql-9.0 postgresql-contrib-9.0
sudo apt-get install postgresql-server-dev-9.0 libpq-dev libpq5

Installing all of these packages now will save you the pain later of trying to track down why certain things won’t work. Another good step is to symlink the archival cleanup tool to /usr/bin for use when you need to enable replication with WAL:

sudo ln -s /usr/lib/postgresql/9.0/bin/pg_archivecleanup /usr/bin/
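As a preview of where pg_archivecleanup fits in, a standby server typically wires it into the archive_cleanup_command setting of its recovery.conf; the archive directory below is a placeholder:

```
# recovery.conf on a standby server (sketch; replication is covered in a future post)
archive_cleanup_command = 'pg_archivecleanup /var/lib/postgresql/archive %r'
```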

At this time, you should also recreate the cluster to ensure proper UTF-8 encoding for all databases:

sudo su postgres
pg_dropcluster --stop 9.0 main
pg_createcluster --start -e UTF-8 9.0 main
exit
sudo /etc/init.d/postgresql restart

You can now check the status of the database using:

sudo /etc/init.d/postgresql status

Once this has all been done, you can find the configuration directory in /etc/postgresql/9.0/main and the data directory in /var/lib/postgresql/9.0/main.

Another useful tool is pg_top, which allows you to view the status of your PostgreSQL processes along with the currently executing queries:

sudo wget
tar -zxpvf pg_top-3.6.2.tar.gz
cd pg_top-3.6.2
# pg_top uses a standard configure/make build:
./configure && make && sudo make install

PostgreSQL and all related utilities should now be properly installed. From here, you can begin creating databases and users with the psql command:

sudo -u postgres psql -d postgres

Of course, there are a lot of commands you can run with this console. You are encouraged to read other sources for more information.
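As a minimal example, a first session usually creates a dedicated role and a UTF-8 database owned by it. The names and password below are placeholders:

```sql
-- Run inside the psql console:
CREATE ROLE myapp WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE myapp_production OWNER myapp ENCODING 'UTF8';
```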

PostgreSQL Tuning

Now that we have successfully installed PostgreSQL, the next step is to tune the configuration parameters for solid performance. Of course, the best way to do this is to measure and profile your application and set these values based on your own needs; all of these settings will ultimately vary with the needs of your particular application. Nonetheless, here is a guide intended to get you started with settings that work “well enough”. From there, the parameters can be tweaked to your heart’s content to find the sweet spot for your individual use case.

Fortunately for us, PostgreSQL guru Gregory Smith has created a tool to make our lives a great deal simpler. This tool is called pgtune and I encourage you to run it on your server soon after setup. The settings it recommends should be your baseline configuration values unless you know otherwise. First, let’s download the pgtune tool:

cd ~
# (download pgtune-0.9.3.tar.gz from the pgtune project page first)
tar -zxvf pgtune-0.9.3.tar.gz

Once the utility has been extracted, simply execute the binary with the proper options and recommended configuration values will be output:

cd pgtune-0.9.3
./pgtune -i /etc/postgresql/9.0/main/postgresql.conf -o ~/postgresql.conf.pgtune --type Web

This will generate the recommended values, tailored to your server, in the file ~/postgresql.conf.pgtune. Simply view this file and note the settings at the bottom:

cat ~/postgresql.conf.pgtune
# Look at the bottom for the relevant parameter values
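To give a rough idea of what to expect, the appended section on an 8GB Web-type instance looks something like the fragment below. These numbers are illustrative only; pgtune derives them from your actual hardware and workload type, so your output will differ:

```
# Example pgtune output fragment (illustrative values)
default_statistics_target = 100
maintenance_work_mem = 480MB
checkpoint_completion_target = 0.9
effective_cache_size = 5632MB
work_mem = 40MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 1920MB
max_connections = 200
```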

Take the settings and append them to your actual configuration file located at /etc/postgresql/9.0/main/postgresql.conf. To be on the safe side, you should also update the kernel.shmmax setting, which is the maximum size of a shared memory segment in bytes. This is particularly necessary for large values of shared_buffers and other shared-memory parameters:

  sudo sysctl -w kernel.shmmax=26843545600
  sudo nano /etc/sysctl.conf
    # Append the following line so the setting survives a reboot:
    # kernel.shmmax=26843545600
  sudo sysctl -p /etc/sysctl.conf

Once this has been updated and you have saved the modified settings, restart your cluster for the settings to take effect (a reload is not enough for parameters such as shared_buffers):

pg_ctlcluster 9.0 main restart

Now that we have set up these tuned parameters, let’s take a deeper look at the most important parameters you can tweak to improve your database performance. The list below is not a comprehensive guide, but in most cases these parameters will serve as the first values to experiment with after using pgtune.

  • work_mem – Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. The total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. The recommended range for this value is the total available memory / (2 * max_connections). On an 8GB system, this could be set to 40MB.
  • effective_cache_size – Sets the planner’s assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. The recommended range for this value is between 60%-80% of your total available RAM. On an 8GB system, this could be set to 5120MB.
  • shared_buffers – Sets the amount of memory the database server uses for shared memory buffers. The default is typically 32 megabytes and must be at least 128 kilobytes. The recommended range for this value is between 20%-30% of the total available RAM. On an 8GB system, this could be set to 2048MB.
  • wal_buffers – The amount of memory used in shared memory for WAL data. The default is 64 kilobytes (64kB). The setting need only be large enough to hold the amount of WAL data generated by one typical transaction. The recommended range for this value is between 2-16MB. On an 8GB system, this could be set to 12MB.
  • maintenance_work_mem – Specifies the maximum amount of memory to be used in maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. Larger settings may improve performance for vacuuming and for restoring database dumps. The recommended range for this value is 50MB per GB of the total available RAM. On an 8GB system, this could be set to 400MB.
  • max_connections – Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections and this parameter can only be set at server start. The recommended range for this value is between 100-200.
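The rules of thumb above reduce to simple arithmetic. Here is a small shell sketch that derives first-pass values from total RAM; the 8GB and 100-connection figures are assumptions matching the examples in this list:

```shell
#!/bin/sh
# First-pass tuning arithmetic from the guidelines above (all values in MB).
RAM_MB=8192       # 8GB instance
MAX_CONN=100      # max_connections

WORK_MEM=$(( RAM_MB / (2 * MAX_CONN) ))   # RAM / (2 * max_connections)
SHARED_BUFFERS=$(( RAM_MB / 4 ))          # ~25% of RAM
EFFECTIVE_CACHE=$(( RAM_MB * 5 / 8 ))     # ~62.5% of RAM
MAINT_WORK_MEM=$(( RAM_MB / 1024 * 50 ))  # 50MB per GB of RAM

echo "work_mem = ${WORK_MEM}MB"                      # 40MB
echo "shared_buffers = ${SHARED_BUFFERS}MB"          # 2048MB
echo "effective_cache_size = ${EFFECTIVE_CACHE}MB"   # 5120MB
echo "maintenance_work_mem = ${MAINT_WORK_MEM}MB"    # 400MB
```

Treat these outputs as starting points to measure against, not final answers.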

For further reading, I would recommend checking out other articles and, of course, Gregory Smith’s book PostgreSQL High Performance.

Wrapping Up

Hopefully you have found the guide above helpful. The intent was to provide an easy way for people to get started using PostgreSQL in their Rails applications today. If you are starting a new Rails application, I strongly encourage you to make your own informed decision about which persistence engine to use. Research the issue for yourself, but I would urge you to at least consider giving PostgreSQL an honest try. Having used MySQL before, I can assure you we did not miss a single thing about it when we migrated.
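For the Rails side, pointing an application at the new server is just a config/database.yml entry plus the pg gem. The host, database name, and credentials below are placeholders:

```
# config/database.yml (sketch)
production:
  adapter: postgresql
  encoding: unicode
  database: myapp_production
  username: myapp
  password: changeme
  host: db.example.com
  pool: 5
```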

This post is the first of many that will deal with PostgreSQL and provide details on how to get this excellent RDBMS to work for your application. In a future post, I would like to cover how to set up “hot standby” replication with archival logs (WAL). That guide will take you step by step through creating a master database and several read-only slaves, and then show how to utilize those slaves from within your Rails application. We also ported from MySQL to PostgreSQL, so another future post will detail how best to migrate your database as well as pitfalls to avoid.