Job Latency - Ruby (Rack)
Integrating any Rack application with HireFire Job Latency is quick and easy when using the hirefire-resource gem.
Add it to your Gemfile:
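Assuming the gem is published under the name used throughout this guide (hirefire-resource), the Gemfile entry would look like this:

```ruby
gem "hirefire-resource"
```

Then run bundle install to install it.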
This library provides middleware which you can add to your stack, exposing an HTTP GET endpoint that HireFire will periodically request in order to get the latest information on your job queue(s).
In addition to that, it also ships with a bunch of convenient macros for common worker libraries in the Ruby ecosystem, including:
- Sidekiq
- (more to come)
The library is open source; you can view the source here.
First, add the HireFire middleware to your Rack config file, usually called config.ru, before the run directive:
use HireFire::Middleware
run MyApp
For this example we'll use the popular Sidekiq library.
Assuming that you have the following worker entry in your Procfile:
worker: bundle exec sidekiq
You'll need to set up a corresponding worker resource for HireFire. Create a new Ruby file that will be loaded at runtime and add the following code:
HireFire::Resource.configure do |config|
  config.dyno(:worker) do
    HireFire::Macro::Sidekiq.latency
  end
end
The Sidekiq macro provided by hirefire-resource simply returns the latency of your queue. When HireFire requests the HTTP GET endpoint at the following URL:
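The exact URL depends on your application's domain; assuming a hypothetical host of your-app.herokuapp.com and the path used by the hirefire-resource middleware, it would look something like:

```
https://your-app.herokuapp.com/hirefire/HIREFIRE_TOKEN/info
```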
This library will transform your resource configuration to a JSON response in the following format:
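As a sketch, with a single worker dyno and a hypothetical latency of 7.42 seconds, the response would resemble:

```json
[{"name":"worker","value":7.42}]
```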
HireFire will then read this information in order to determine the current latency of your worker dynos. Using this, in combination with your autoscaling configuration in the HireFire user interface, HireFire will determine how and when to scale your worker dynos.
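To illustrate what "latency" means here, a minimal sketch (not the gem's actual implementation) that treats queue latency as the age of the oldest pending job:

```ruby
# Queue latency: how long the oldest pending job has been waiting,
# in seconds. Returns 0.0 for an empty queue.
def queue_latency(enqueued_ats, now: Time.now)
  oldest = enqueued_ats.min
  oldest ? (now - oldest).to_f : 0.0
end

now  = Time.at(1_700_000_100)
jobs = [Time.at(1_700_000_000), Time.at(1_700_000_050)]
queue_latency(jobs, now: now) # => 100.0
queue_latency([], now: now)   # => 0.0
```

A higher value means jobs are waiting longer, which is HireFire's signal to scale up your worker dynos.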
The HIREFIRE_TOKEN mentioned in the above code snippet can be found in the HireFire UI when creating or updating a dyno manager. It's simply an environment variable that you'll need to add to your Heroku application later on.
What if you have multiple queues? This is quite common, and with HireFire you can deal with each queue separately by scaling multiple Procfile entries individually.
For example, let's say your Procfile not only contains a worker entry, but also an urgent_worker entry:
worker: bundle exec sidekiq -q default
urgent_worker: bundle exec sidekiq -q urgent
All you have to do to make this work is configure the following resources in your initializer:
HireFire::Resource.configure do |config|
  config.dyno(:worker) do
    HireFire::Macro::Sidekiq.latency
  end

  config.dyno(:urgent_worker) do
    HireFire::Macro::Sidekiq.latency(:urgent)
  end
end
With this in place, the library will transform your resources into a JSON response with the following format:
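For example, with hypothetical latencies of 7.42 and 0.65 seconds, the response would resemble:

```json
[{"name":"worker","value":7.42},{"name":"urgent_worker","value":0.65}]
```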
Confirm that it works
With all this in place, start the local development server and access the following URL:
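Assuming your development server runs on port 3000 and HIREFIRE_TOKEN is set in your local environment, the URL would look something like:

```
http://localhost:3000/hirefire/HIREFIRE_TOKEN/info
```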
You should now see a JSON response containing the latency of your queues. If you do, you're ready to deploy to Heroku.
Now that you've integrated hirefire-resource into your application and deployed it to Heroku, log in to HireFire, create two managers named worker and urgent_worker, and configure them based on your auto-scaling requirements.
Don't forget to add the previously mentioned HIREFIRE_TOKEN environment variable to your Heroku application.