Job Latency - Any Programming Language

For the purpose of this article, we'll be using the Ruby programming language to demonstrate how to integrate the Job Latency metric source into your application from scratch. This guide applies to any programming language, not just Ruby.

HireFire will periodically perform an HTTP GET request against an endpoint in your application. The request is made to the following URL/path pattern:

http://your-domain.com/hirefire/<HIREFIRE_TOKEN>/info

The HIREFIRE_TOKEN is an environment variable that you must set; its value can be found in the HireFire UI when creating or updating a dyno manager.

Inside your application, you'll want to create a new route with the following path:

/hirefire/<HIREFIRE_TOKEN>/info

In Ruby on Rails, you would add something like this to config/routes.rb:

token = ENV["HIREFIRE_TOKEN"] || "development"
get "/hirefire/#{token}/info", to: "hirefire#info"

This will invoke the info action on the HireFireController. With this in place, you'll want to return a JSON response containing your queue latencies, which HireFire will use to autoscale your worker-type dynos.

The JSON format should be the following:

[{"name" : "worker", "value" : 32}]

So all you have to do is query your particular worker library in any way you wish and return an array of objects with the properties name (string) and value (integer), where name should reflect the dyno name in your Procfile (e.g. worker) and value should reflect the latency of the queue that the dyno (in this case worker) works on.

You can (and should) return multiple objects, one for each Procfile entry that you want HireFire to autoscale, for example:

[{"name" : "worker", "value" : 32},
 {"name" : "urgent", "value : 8}]

Going back to our Ruby on Rails example, here is what the (simplified) info action would look like:

class HireFireController < ApplicationController
  # ... snip ...

  def info
    render json: JSON.generate([
      {name: "worker", value: worker_latency},
      {name: "urgent_worker", value: urgent_worker_latency}
    ])
  end

  private

  def worker_latency
    # logic to measure the worker queue's latency
  end

  def urgent_worker_latency
    # logic to measure the urgent_worker queue's latency
  end
end

Confirm that it works

With all this in place, start the local development server and access the following URL:

http://localhost:3000/hirefire/development/info

You should now see a JSON response containing the latency of your queues. If that is the case, you can deploy to Heroku.
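You can also verify the response programmatically. Here is a minimal sketch using Ruby's standard library, assuming the server runs on port 3000 and HIREFIRE_TOKEN is unset locally (so the "development" fallback from the routes example applies):

require "net/http"
require "json"

# Fetch the endpoint and parse the JSON body.
body = Net::HTTP.get(URI("http://localhost:3000/hirefire/development/info"))
p JSON.parse(body)
# e.g. [{"name"=>"worker", "value"=>32}, {"name"=>"urgent_worker", "value"=>8}]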

HireFire UI

Now that you've integrated HireFire into your application and deployed it to Heroku, log in to HireFire, create two dyno managers named worker and urgent_worker, and configure them based on your autoscaling requirements.

Don't forget to add the previously mentioned HIREFIRE_TOKEN environment variable to your Heroku application.
