Job Latency - Ruby (on Rails)
Integrating Ruby on Rails with HireFire Job Queue is quick and easy using the hirefire-resource RubyGem.
Add it to your Gemfile:
gem "hirefire-resource"
This library injects middleware into your stack, exposing an HTTP GET endpoint that HireFire will periodically request in order to get the latest information on your job queue(s).
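Conceptually, the injected middleware intercepts requests to the HireFire info path and answers with queue metrics as JSON, passing every other request through to your application. The following Rack-style sketch is illustrative only (the class name and wiring are assumptions, not the gem's actual implementation):

```ruby
require "json"

# Illustrative sketch: intercepts GET /hirefire/<token>/info and returns
# queue metrics as JSON; all other requests pass through to the app.
class HireFireInfoMiddleware
  def initialize(app, token:, metrics:)
    @app = app          # the next Rack app in the stack
    @token = token      # value of the HIREFIRE_TOKEN environment variable
    @metrics = metrics  # callable returning [{name:, value:}, ...]
  end

  def call(env)
    if env["PATH_INFO"] == "/hirefire/#{@token}/info"
      body = JSON.generate(@metrics.call)
      [200, { "Content-Type" => "application/json" }, [body]]
    else
      @app.call(env)
    end
  end
end
```

In a Rails application the gem wires this up for you automatically; the sketch only shows the request/response shape.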
In addition to that, it also ships with a bunch of convenient macros for common worker libraries in the Ruby ecosystem, including:
- Sidekiq
- (more to come)
The library is open source, so you can inspect its source code.
Integration
For this example we'll use the popular Sidekiq library.
Assuming that you have the following worker entry in your Procfile:
worker: bundle exec sidekiq
You'll need to set up a corresponding worker resource for HireFire. Create an initializer at app_root/config/initializers/hirefire_resource.rb and add the following code:
HireFire::Resource.configure do |config|
  config.dyno(:worker) do
    HireFire::Macro::Sidekiq.latency
  end
end
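Queue latency here means the age of the oldest job still waiting in the queue. To illustrate the idea independently of Sidekiq, here is a rough sketch where a queue is modeled simply as a list of enqueue timestamps (this helper is hypothetical, not part of the gem):

```ruby
# Illustrative only: latency is how long the oldest pending job has been
# waiting. Sidekiq derives this from the oldest job's enqueued-at time;
# here a queue is modeled as an array of enqueue timestamps.
def queue_latency(enqueued_at_times, now: Time.now)
  oldest = enqueued_at_times.min
  oldest ? (now - oldest).round : 0 # empty queue => zero latency
end
```

An empty queue reports zero latency, which tells HireFire the workers are keeping up.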
The Sidekiq macro provided by hirefire-resource simply returns the latency of your queue. When HireFire requests the HTTP GET endpoint at the following URL:
http://your-domain.com/hirefire/<HIREFIRE_TOKEN>/info
This library will transform your resource configuration to a JSON response in the following format:
[{"name":"worker", "value":250}]
HireFire will then read this information in order to determine the current latency of your worker dynos. Using this, in combination with your autoscaling configuration in the HireFire user interface, HireFire will determine how and when to scale your worker dynos.
The HIREFIRE_TOKEN mentioned in the above URL can be found in the HireFire UI when creating or updating a dyno manager. It's just an environment variable that you'll need to add to your Heroku application later on.
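Assuming your token is abc123 (yours will differ; copy it from the HireFire UI), you can set the environment variable with the Heroku CLI:

```shell
# Replace abc123 with the token shown in the HireFire UI.
heroku config:set HIREFIRE_TOKEN=abc123
```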
Multiple Queues
What if you have multiple queues? This is quite common, and with HireFire you can deal with each queue separately by scaling multiple Procfile entries individually.
For example, let's say your Procfile not only contains a worker entry, but also an urgent_worker entry:
worker: bundle exec sidekiq -q default
urgent_worker: bundle exec sidekiq -q urgent
All that you have to do to make this work is configure the following resources in your initializer:
HireFire::Resource.configure do |config|
  config.dyno(:worker) do
    HireFire::Macro::Sidekiq.latency
  end

  config.dyno(:urgent_worker) do
    HireFire::Macro::Sidekiq.latency(:urgent)
  end
end
With this in place, the library will now transform your resources into a JSON response with the following format:
[{"name":"worker", "value":250}, {"name":"urgent_worker", "value":30}]
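The transformation itself is straightforward: each configured dyno block is evaluated at request time and its result is paired with the dyno name. A minimal sketch of that mapping, under assumed names (this is not the gem's actual internals):

```ruby
require "json"

# Illustrative sketch of how configured dyno blocks become the JSON
# response that HireFire reads. Each block is called at request time.
class DynoConfig
  def initialize
    @dynos = {}
  end

  # Registers a dyno name with a block that returns its current metric.
  def dyno(name, &block)
    @dynos[name] = block
  end

  # Evaluates every block and renders the [{name:, value:}, ...] JSON.
  def to_json_response
    JSON.generate(@dynos.map { |name, block| { name: name, value: block.call } })
  end
end

config = DynoConfig.new
config.dyno(:worker) { 250 }
config.dyno(:urgent_worker) { 30 }
config.to_json_response
# => [{"name":"worker","value":250},{"name":"urgent_worker","value":30}]
```

Because the blocks run on every request, the reported values always reflect the queues' current state.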
Confirm that it works
With all this in place, start the local development server and access the following URL:
http://localhost:3000/hirefire/development/info
You should now see a JSON response containing the latency values for your queues. If you do, you can deploy to Heroku.
HireFire UI
Now that you've integrated hirefire-resource into your application and deployed it to Heroku, log in to HireFire, create two managers named worker and urgent_worker, and configure them based on your autoscaling requirements.
Don't forget to add the previously mentioned HIREFIRE_TOKEN environment variable to your Heroku application.