Job Queue - Ruby (Rack)

Integrating any Rack application with HireFire Job Queue is quick and easy using the hirefire-resource RubyGem.

Add it to your Gemfile:

gem "hirefire-resource"

This library provides middleware which you can add to your stack, exposing an HTTP GET endpoint that HireFire will periodically request in order to get the latest information on your job queue(s).

In addition to that, it also ships with a set of convenient macros for common worker libraries in the Ruby ecosystem, including Sidekiq, Delayed::Job, Resque, and Que.

The library is open source; you can view its source code on GitHub.


First, add the HireFire middleware to your Rack config file (usually config.ru), before the run call:

use HireFire::Middleware
run MyApp
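To make the middleware's role concrete, here is an illustrative sketch of how a Rack-style middleware can answer one path itself and pass every other request down the stack. This is a hypothetical stand-in, not the gem's actual implementation (the real HireFire::Middleware also handles token verification and builds the payload from your configured resources):

```ruby
require "json"

# Hypothetical middleware: answers */info itself, delegates the rest.
class InfoMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["PATH_INFO"].end_with?("/info")
      # Respond with a HireFire-style JSON payload (quantity stubbed to 0).
      payload = [{ name: "worker", quantity: 0 }].to_json
      [200, { "content-type" => "application/json" }, [payload]]
    else
      # Not our path: hand the request to the next app in the stack.
      @app.call(env)
    end
  end
end

# Exercise it without a real web server:
app        = ->(env) { [200, { "content-type" => "text/plain" }, ["hello"]] }
middleware = InfoMiddleware.new(app)

status, _headers, body = middleware.call("PATH_INFO" => "/hirefire/TOKEN/info")
puts status      # 200
puts body.first  # [{"name":"worker","quantity":0}]
```

Because the middleware sits before run, it can short-circuit HireFire's polling requests without your application ever seeing them.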

For this example we'll use the popular Sidekiq library.

Assuming that you have the following worker entry in your Procfile:

worker: bundle exec sidekiq

You'll need to set up a corresponding worker resource for HireFire. Create a new Ruby file that will be loaded at runtime (for example, an initializer) and add the following code:

HireFire::Resource.configure do |config|
  config.dyno(:worker) do
    HireFire::Macro::Sidekiq.queue
  end
end
The Sidekiq macro provided by hirefire-resource simply counts all the jobs across all of your queues and returns that total as an integer. HireFire periodically requests the HTTP GET endpoint at the following path:

/hirefire/<HIREFIRE_TOKEN>/info
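Conceptually, "count all the jobs in all of your queues" reduces to summing per-queue sizes. The sketch below uses a plain hash as a hypothetical stand-in for that data (the real macro reads queue sizes from Redis through Sidekiq's API):

```ruby
# Hypothetical queue sizes; stands in for what Sidekiq reports from Redis.
queue_sizes = { "default" => 10, "mailers" => 3, "urgent" => 2 }

# Across all queues -- what a single :worker dyno block would report:
total = queue_sizes.values.sum
puts total # 15

# For one specific queue -- relevant once you scale queues separately:
puts queue_sizes.fetch("urgent", 0) # 2
```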

This library will transform your resource configuration into a JSON response in the following format (the quantity reflects your actual queue size):

[{"name":"worker","quantity":42}]
HireFire will then read this information in order to determine the current queue size of your worker dynos. Using this, in combination with your auto-scaling configuration in the HireFire user interface, HireFire will determine how and when to scale your worker dynos.
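To make that JSON shape concrete, here is a toy re-implementation of the configure/dyno pattern. It mirrors the idea only; names like ResourceConfig and to_payload are invented for this sketch and are not the gem's API:

```ruby
require "json"

# Toy version of the resource-configuration pattern: each dyno name
# maps to a block that returns the current job count for that dyno.
class ResourceConfig
  def initialize
    @dynos = {}
  end

  def dyno(name, &block)
    @dynos[name] = block
  end

  # Evaluate every block and serialize the results as HireFire-style JSON.
  def to_payload
    @dynos.map { |name, block| { name: name, quantity: block.call } }.to_json
  end
end

config = ResourceConfig.new
config.dyno(:worker)        { 5 } # real code would query Sidekiq here
config.dyno(:urgent_worker) { 2 }
puts config.to_payload
# [{"name":"worker","quantity":5},{"name":"urgent_worker","quantity":2}]
```

The blocks are evaluated lazily, on each request: the queue is only counted when HireFire actually polls the endpoint.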

The HIREFIRE_TOKEN mentioned in the above snippet can be found in the HireFire UI when creating/updating a dyno manager. It's just an environment variable that you'll need to add to your Heroku application later on.

Multiple Queues

What if you have multiple queues? This is quite common, and with HireFire you can deal with each queue separately by scaling multiple Procfile entries individually.

For example, let's say your Procfile not only contains a worker entry, but also an urgent_worker entry:

worker: bundle exec sidekiq -q default
urgent_worker: bundle exec sidekiq -q urgent

All you have to do to make this work is configure the following resources in your initializer:

HireFire::Resource.configure do |config|
  config.dyno(:worker) do
    HireFire::Macro::Sidekiq.queue(:default)
  end

  config.dyno(:urgent_worker) do
    HireFire::Macro::Sidekiq.queue(:urgent)
  end
end
With this in place, the library will transform your resources into a JSON response with the following format:

[{"name":"worker","quantity":42},{"name":"urgent_worker","quantity":7}]
Confirm that it works

With all this in place, start the local development server and access the following URL (substituting your local port and token):

http://localhost:3000/hirefire/<HIREFIRE_TOKEN>/info
You should now see a JSON response containing your queue sizes. If you do, congratulations, you're done! Deploy it to Heroku.

HireFire UI

Now that you've integrated hirefire-resource into your application and deployed it to Heroku, log in to HireFire, create two managers named worker and urgent_worker, and configure them based on your auto-scaling requirements.

Don't forget to add the previously mentioned HIREFIRE_TOKEN environment variable to your Heroku application.

Still need help? Contact Us