HireFire - Job Queue Latency (Any Language)
"Job Queue Latency is the amount of time since the oldest job in the queue was ready to be processed."
If you're not using one of our natively supported programming languages (Ruby, Python), this guide demonstrates how to integrate the Job Queue Latency autoscaling strategy into your application. We'll use Node.js and Express.js for demonstration purposes, but the same principles apply in any programming environment.
HireFire will periodically make an HTTP GET request to a specific endpoint within your application. This request is formatted to follow a particular URL/PATH pattern:
https://your-domain.com/hirefire/<HIREFIRE_TOKEN>/info
`HIREFIRE_TOKEN` represents an environment variable that needs to be added to your Heroku environment variables. You can locate this token within the HireFire app when you're setting up or modifying a dyno manager that requires this variable.
To accommodate this in your application, you'll need to establish a new route adhering to the path:
/hirefire/<HIREFIRE_TOKEN>/info
For an application built using Node.js and Express.js, your route definition might look something like this:
```javascript
const express = require("express")
const app = express()
const port = process.env.PORT || 3000
const token = process.env.HIREFIRE_TOKEN || "development"

app.get(`/hirefire/${token}/info`, (req, res) => {
  res.json([{ name: "worker", value: yourJobQueueLatency() }])
})

function yourJobQueueLatency() {
  // Implement the logic to measure your job queue's latency
  return 32.0 // Example job queue latency value
}

app.listen(port, () => {
  console.log(`Server running on port ${port}`)
})
```
This setup directs the `/hirefire/<HIREFIRE_TOKEN>/info` route to a handler function that responds with a JSON payload. This payload should contain your job queue latencies, which HireFire uses to autoscale your worker dynos.
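As a sketch of what `yourJobQueueLatency()` might do, here's one way to compute latency as the age of the oldest ready job. The in-memory `queue` array is a stand-in for your actual job store (Redis, Postgres, etc.); the field names are assumptions for illustration:

```javascript
// Illustrative stand-in for a real job store: each job records the
// time (in milliseconds) at which it became ready to be processed.
const queue = [
  { id: 1, readyAt: Date.now() - 32000 }, // oldest job, ready 32s ago
  { id: 2, readyAt: Date.now() - 5000 },
]

function yourJobQueueLatency() {
  if (queue.length === 0) return 0.0 // empty queue means zero latency
  // Latency is the age of the oldest ready job, in seconds.
  const oldest = Math.min(...queue.map((job) => job.readyAt))
  return (Date.now() - oldest) / 1000
}
```

In a real application you would replace the `queue` array with a query against your job backend, e.g. reading the enqueue timestamp of the head of a Redis list.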
Your JSON response should follow this structure:
[{"name": "worker", "value": 32.0}]
Here, `name` is a string representing the name of the dyno as defined in your `Procfile`, and `value` is a float reflecting the latency in seconds of the queue being processed by `worker`.
Return multiple objects if you have several `Procfile` entries that you wish HireFire to autoscale, for instance:
[{"name": "worker", "value": 32.0}, {"name": "worker_critical", "value": 8.0}]
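One way to build that multi-entry payload is to map each `Procfile` entry to its queue's latency. The `latencyFor` helper and its hardcoded values below are placeholders for illustration, not part of HireFire's API:

```javascript
// Sketch: report latencies for multiple Procfile entries from the
// same endpoint. Each object pairs a dyno name with its queue latency.
function queueLatencies() {
  return [
    { name: "worker", value: latencyFor("default") },
    { name: "worker_critical", value: latencyFor("critical") },
  ]
}

function latencyFor(queueName) {
  // Replace with a real lookup against your job store; these
  // hardcoded values are placeholders.
  const placeholder = { default: 32.0, critical: 8.0 }
  return placeholder[queueName] ?? 0.0
}
```

In your route handler you would then call `res.json(queueLatencies())` instead of building the array inline.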
Verifying Functionality
With everything in place, initiate your local development server and navigate to:
http://localhost:3000/hirefire/development/info
You should be greeted with a JSON response showcasing the latency of your job queues. If all looks good, you're ready to proceed with deployment to your production environment.
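If you'd rather automate this check than eyeball the browser, a small helper like the following (hypothetical, not part of HireFire) can validate that a response matches the expected structure: an array of objects with a string `name` and numeric `value`:

```javascript
// Sketch: validate that a payload matches the structure HireFire
// expects, i.e. an array of { name: string, value: number } objects.
function isValidHireFirePayload(payload) {
  return (
    Array.isArray(payload) &&
    payload.every(
      (entry) =>
        typeof entry.name === "string" && typeof entry.value === "number"
    )
  )
}
```

You could run this against the parsed JSON from `http://localhost:3000/hirefire/development/info` before deploying.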
After deploying your application, log into HireFire, create and configure two Dyno Managers named `worker` and `worker_critical`, select the `HireFire - Job Queue Latency` strategy, and configure the autoscaling rules to your liking.
Once done, enable the Dyno Managers to autoscale your `worker` and `worker_critical` dynos.
If you have any questions or need assistance, don't hesitate to reach out to us at support@hirefire.io.