Asynchronous Processing in Web Applications, Part 1: A Database Is Not a Queue

When hacking on web applications, you will inevitably find certain actions that take too long and must therefore be pulled out of the HTTP request/response cycle. In other cases, applications need an easy way to reliably communicate with other services in your system architecture.

The specific reasons will vary: perhaps the website has a real-time element or a live chat feature, or we need to resize and process images, slice up and transcode video, analyze our logs, or simply send emails at high volume. In all these cases, though, asynchronous processing becomes important to your operations.

Fortunately, there are a variety of libraries across all platforms intended to provide asynchronous processing. In this series, I want to explore this landscape, understand the solutions available, how they compare, and, more importantly, how to pick the right one for your needs.

Asynchronous Processing

Let’s start by understanding asynchronous processing a bit better. Web applications undoubtedly have a great deal of code that executes as part of the HTTP request/response cycle. This is suitable for faster tasks that can be done within hundreds of milliseconds or less. However, any processing that would take more than a second or two will ultimately be far too slow for synchronous execution. In addition, there is often processing that needs to be scheduled in the future and/or processing that needs to affect an external service.

[Figure: synchronous HTTP request/response cycle]

In these cases when we have a task that needs to execute but that is not a candidate for synchronous processing, the best course of action is to move the execution outside the request/response cycle. Specifically, we can have the synchronous web app simply notify another separate program that certain processing needs to be done at a later time.
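As a sketch of this handoff, here is a deliberately simplified in-process version using a queue and a worker thread (a real system would use separate processes or machines; the handler and task names are illustrative):

```python
import queue
import threading

# A shared queue standing in for the channel between the web app and a worker.
task_queue = queue.Queue()

def handle_request(user_email):
    """Simulated web handler: enqueue the slow work and return immediately."""
    task_queue.put({"action": "send_welcome_email", "to": user_email})
    return "202 Accepted"

def worker():
    """Separate worker loop that performs the slow processing."""
    while True:
        task = task_queue.get()
        if task is None:  # sentinel to shut the worker down
            break
        # ... perform the slow task here (send email, resize image, etc.) ...
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
```

The key property is that `handle_request` returns as soon as the task is enqueued; the actual work happens elsewhere, on the worker's own schedule.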

[Figure: asynchronous processing outside the request/response cycle]

Now, instead of the task running as a part of the actual web response, the processing runs separately so that the web application can respond quickly to the request. In order to achieve asynchronous processing, we need a way to allow multiple separate processes to pass information to one another. One naive approach might be to persist these notifications into a traditional database and then retrieve them for processing using another service.

Why not a database?

There are many good reasons why a traditional database is not well suited for asynchronous processing. Often you might be tempted to use a database because there can be an understandable reluctance to introduce new technologies into a web stack. This leads people to try to use their RDBMS as a way to do background processing or service communication. While this can often appear to ‘get the job done’, there are a number of limitations and concerns that should not be overlooked.

[Figure: asynchronous processing model with producers and consumers]

There are two sides to any asynchronous processing: the service(s) that produce processing tasks and the service(s) that consume and process those tasks. When a database is used for this, there is typically a table whose records represent pending tasks, along with a status flag indicating what state each task is in and whether it has completed.
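A minimal sketch of that producer side, using SQLite for illustration (the table and column names are hypothetical, not from any particular library):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tasks (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        payload    TEXT NOT NULL,                     -- the task details, e.g. JSON
        status     TEXT NOT NULL DEFAULT 'pending',   -- 'pending' | 'running' | 'done'
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

def enqueue(task):
    """The producer: the web app inserts a row describing work to be done."""
    conn.execute("INSERT INTO tasks (payload) VALUES (?)", (json.dumps(task),))
    conn.commit()

enqueue({"action": "resize_image", "image_id": 42})
```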

Polling Can Hurt

The first issue is how best to consume and process the tasks stored in that table. With a traditional database this typically means a service that constantly queries for new tasks or messages. The service may poll the database every second or every minute, but it is always asking the database whether there has been an update. Polling the database has several downsides. With a short interval, you hammer your database with constant queries. With a long interval, you introduce many unnecessary processing delays. In the event of having multiple processing steps, the fastest route through your system has now been delayed to the sum of all the different polling intervals.
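The consumer side of such a design tends to look like the following sketch (again using an illustrative SQLite table; the interval constant makes the latency/load trade-off explicit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'pending')"
)
conn.execute("INSERT INTO tasks (payload) VALUES ('send_email')")
conn.commit()

# Short intervals hammer the database; long intervals delay every task.
POLL_INTERVAL = 1.0  # seconds

def poll_once():
    """One polling pass: ask the database whether any work has appeared."""
    rows = conn.execute(
        "SELECT id, payload FROM tasks WHERE status = 'pending'"
    ).fetchall()
    for task_id, payload in rows:
        # ... process the task here ...
        conn.execute("UPDATE tasks SET status = 'done' WHERE id = ?", (task_id,))
    conn.commit()
    return len(rows)

# A real worker would loop forever:
#   while True:
#       poll_once()
#       time.sleep(POLL_INTERVAL)
```

Notice that every pass issues a query whether or not any work exists; at a one-second interval that is 86,400 queries a day per consumer just to ask "anything new?".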

[Figure: consumers polling the database table for new tasks]

Polling requires very fast and frequent queries against the table to be effective, which adds significant load to the database even at medium volume. Even worse, a given database table can typically be made fast at adding data, updating data, or querying data, but almost never all three on the same table. If you are constantly inserting, updating, and querying, race conditions and deadlocks become very likely. These locks occur because multiple consumers will all be fighting with each other over the same table while the producers keep adding new items. As volume scales up, you will see database load increase, performance decrease, and pile-ups become increasingly common.
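To mitigate that contention, DB-backed queues generally try to "claim" a task in a single atomic UPDATE rather than a SELECT followed by a separate UPDATE. A sketch of the pattern (SQLite syntax for illustration; databases that support it would typically use `SELECT ... FOR UPDATE SKIP LOCKED` instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'pending')"
)
conn.execute("INSERT INTO tasks (payload) VALUES ('transcode_video')")
conn.commit()

def claim_task(worker_id):
    """Atomically mark one pending task as owned by this worker.

    Doing the claim in a single UPDATE closes the window between a SELECT
    and a separate UPDATE in which two workers could grab the same row.
    """
    cur = conn.execute(
        "UPDATE tasks SET status = ? "
        "WHERE id = (SELECT id FROM tasks WHERE status = 'pending' LIMIT 1)",
        (f"claimed:{worker_id}",),
    )
    conn.commit()
    return cur.rowcount  # 1 if we claimed a task, 0 if none were pending
```

Even with this pattern, every claim is still a write against the same hot table that producers are inserting into, which is exactly the contention described above.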

Manual Cleanup

In addition, clearing out old jobs can be troublesome, because even at medium volume the table accumulates many completed tasks. The old completed tasks need to be removed at regular intervals or the table will grow quite large. Unfortunately, this cleanup has to be performed manually, and deletes are often not particularly efficient, especially when they run frequently alongside the constant updates and queries happening at the same time.
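That cleanup typically ends up as a manually scheduled delete along these lines (a sketch; the seven-day retention window and column names are arbitrary choices for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tasks (
        id          INTEGER PRIMARY KEY,
        status      TEXT DEFAULT 'pending',
        finished_at TIMESTAMP
    )
""")
# One old completed task, one still-pending task.
conn.execute(
    "INSERT INTO tasks (status, finished_at) VALUES ('done', datetime('now', '-8 days'))"
)
conn.execute("INSERT INTO tasks (status) VALUES ('pending')")
conn.commit()

def purge_completed(days=7):
    """Delete completed tasks older than the retention window."""
    cur = conn.execute(
        "DELETE FROM tasks WHERE status = 'done' "
        "AND finished_at < datetime('now', ?)",
        (f"-{days} days",),
    )
    conn.commit()
    return cur.rowcount  # number of rows removed
```

Each run of this job is another burst of writes competing with the inserts, updates, and polling queries already hitting the table.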

Scaling Won’t Be Easy

In the end, while a database will appear to work at first as a simple way to send background instructions to be processed, this approach will come back to bite you. The many limitations such as constant polling, manual cleanups, deadlocks, race conditions and manual handling of failed processing should make it fairly clear that a database table is not a scalable strategy for this type of processing.


The takeaway here is that asynchronous processing is useful whether you need to send an SMS, validate emails, approve posts, process orders, or just pass information between services. This type of background processing is simply not a task that a traditional RDBMS is best suited to solve. While there are exceptions to this rule, such as PostgreSQL with its excellent LISTEN/NOTIFY support, there is a different class of software that was built from scratch for asynchronous processing use cases.

This type of software is called a message queue, and you should strongly consider using one instead for a far more reliable, efficient, and scalable way to perform asynchronous processing tasks. In the next part of this series, we will explore several different message queues in detail to understand how they work as well as how to contrast and compare the various options. I hope this introduction was helpful, and please let me know what you’d like to have covered in future parts of this series.

Continue to Part 2: Developers Need To Understand Message Queues

30 thoughts on “Asynchronous Processing in Web Applications, Part 1: A Database Is Not a Queue”

  1. This is a nice intro to the subject.

    I find it interesting how a tech that I was using back in 1999, which was seen as boring and of no use to anyone outside of the banks is suddenly being embraced as one of the new hot things in web development :)

    Somehow when IBM and Microsoft do something it’s dull, but when Amazon offer it, it becomes new and sexy. Good to see the interest though, I’ve always thought async and message-queuing is far too underused.

    • Yeah, I wanted to write this series largely because message queues are so well suited to all sorts of important tasks that are increasingly relevant in modern web apps. I hope people new to using message queues will find it a helpful starting point.

    • “Somehow when IBM and Microsoft do something it’s dull, but when Amazon offer it, it becomes new and sexy.” I think the difference is with non MS/IBM solutions it doesn’t cost $10,000 to spin up an instance of a simple dedicated message queue system. While their solutions are physically scalable, they are not financially scalable. The insane licensing fees are what keep people from implementing sensible solutions.

      • Good point :)

        These days there are so many good open source systems that there’s really no excuse for not using them.

        I briefly worked on what is now Websphere MQ (it was MQSeries at the time) and it was actually a really good piece of software, the distributed transaction management was pretty awesome.

  2. Databases should not be used for message storing for a lot of reasons, especially for short lived messages (databases just looove frequent updates, especially on indexes).

    But I have just the opposite case where it works better than a messaging queue:

    I store some jobs to be done in the future (from a couple of minutes to weeks in the future).
    I poll the DB once a minute and batch the execution of jobs.

    I’m not sure how many messaging solutions care for stuff like “execute this job 2 weeks from now”.

    • There are definitely different types of async processing. What you describe sounds more like the UNIX task scheduling system behind cron and ‘at’.

      As usual in software it’s horses-for-courses :)

      • Agreed, of course as with everything there is no silver bullet. Most popular general purpose message queues were not built with far future scheduling in mind. That said, there are a few message work queues I plan to talk about in this series that actually can support that requirement quite well.

  3. Exactly what we’ve been facing at work. MySQL proved far too unresponsive for real-time data processing and replication across multiple locations (we have a mix of in-house servers, data centre racks, and cloud servers spanning continents). One day we were introduced to message queues, and it’s been one of the best technologies we’ve integrated recently (next to memcached). It is truly meant for operations of the “fire and forget” nature, such as logging and data replication.

    Case in point: all our logs are now sent as ‘messages’ to a ‘fanout’ exchange which propagates to ‘queues’ on geographically distributed servers. On each queue server there is a ‘consumer’ job waiting to process each message. All of that happens in real time, with almost no overhead to the main operation where the logging started. What’s more, overall server load stays low even with lots of messages flying around (in total we push tens of millions of rows into our DB every day) and they are guaranteed to arrive.

    Of course there are issues with connection timeouts and stuck queues etc., but MQ proved far more reliable to scale and maintain than the MySQL+memcached combo for a specific subset of problems, which in our case was logging. The MQ implementation we used is RabbitMQ. Do give it or another MQ a try, as I really think MQ will change your development process for the better.

    • Thanks for sharing your experience, sounds like you guys managed to find a perfect example of the target use case for MQs. MQs can be a powerful tool for architecting scalable systems if you understand when and how to use them correctly. I think every developer should strive to understand them, when (and when not) to use them and how to integrate them as part of a larger high performance and reliable setup.

    • Great glad it could be helpful! The next post will really start to dig into the nitty gritty of popular message queue options, how they work, and how to compare and contrast them to find the right one based on the requirements of your project.

  4. We’ve just been talking about this subject in the office and something just occurred to me that you might want to talk about.

    Many people conflate message queues and job scheduling into one thing. This is understandable, as that’s what most people use queuing to do: trigger jobs.

    However message queues are actually a level of abstraction above job scheduling.

    A scheduled job in something like DelayedJob (if you are a Rails user) is an instruction to carry out a specific piece of work, e.g. “send a welcome email to this new user”. In fact in DJ you are explicitly specifying the function to call.

    In contrast, a message in a message queue may be simply a statement of fact, such as “There is a new user with the following details …” (suitably formatted, of course). This message may be received by many different processes, or none at all, depending on the topology of the queues and the different processes involved (pub-sub, fan-out, point-to-point). These processes may go off and do all sorts of different things. There might be one process that emails a new user, while another creates a home area, while yet another logs the creation.

    Importantly, the sender of the message doesn’t need to know anything about these other systems, it just needs to know where to send the message.

    My first introduction to queueing was when I worked at Lehman Brothers back in 1999 as an intern, we used queues to tie together all the different systems that might be involved in processing a trade. These varied from brand-new, to twenty year old legacy systems running on a mix of technologies. It all worked surprisingly well.

    • Absolutely right, job scheduling is really a very small piece of what queues can be useful for. This is in part because job processing tends to be just 1-to-1 (producer -> consumer), while with an MQ you can have all sorts of delivery strategies, consumption strategies, etc. That’s why I tried to keep the descriptions general; we use message queues to fan out messages between the services in our SOA as well.

  5. Very useful explanation of the pain and suffering you would face if you tried to create your own messaging server out of your DB server.

    Definitely a no-go area, especially while there are a number of excellent open source and commercial JMS and AMQP servers around. I look forward to your article about proper messaging solutions using these servers.

    By the way I like your diagrams, very simple and elegant. What tool did you use to produce them?

  6. A database can be used to implement a message queue without the caveats that you describe, but not by polling a single table.
    For instance:

    1 table for incoming jobs, 1 table for terminated jobs, and triggers to wake up workers (for consuming incoming jobs and cleaning terminated jobs).

    In short: don’t blame the tool, blame the code.

  7. What is your opinion on Redis and other NoSQL solutions? I’ve seen people very much trying to ram a queue down Redis’ throat. I imagine it works for low-volume environments (like 1/sec), but beyond that it fails to scale.

    • Also interested in a response here.

      I am developing a node.js application in which it will have one consumer polling the redis DB, rather than all users/connections polling it.

      The number of consumers will be “statically” decided by the number of node.js application instances we spawn.

    • I am really glad you brought this up. This is something I will be covering in greater detail in my next post. In the Ruby ecosystem, Redis is used very frequently as a job queue, to some success, with Resque, but that doesn’t mean Redis is a true replacement for an MQ by any stretch. And by trying to replace the need for a true MQ with Redis, depending on your requirements you may be missing out on more than you initially realize in terms of long-term scalability, job throughput, message delivery control, robust error handling, etc.

  8. It’s a pity you don’t have a donate button! I’d most certainly donate to this outstanding blog! I guess for now I’ll settle for bookmarking and adding your RSS feed to my Google account. I look forward to brand new updates and will talk about this blog with my Facebook group. Chat soon!

  9. “… the fastest route through your system has now been delayed to the sum of all the different polling intervals.”

    Shouldn’t that read the *slowest* possible route through your system? That’s analogous to hitting every red light on the way to work right as it turns red. The fastest possible route would be no wait at all, like hitting every single green light on the way to work.

    I would say, the much more useful statistic is that the average time through the system would be the sum of half of each polling interval.
