"Service (Response Time) is the amount of time (in milliseconds) spent proxying data between the web dyno and the client."

If you want to auto-scale based on Response Time percentiles or averages, all you have to do is create a logdrain and point it at our secure endpoint:

heroku drains:add https://logdrain.hirefire.io

Once created, take note of the drain token:

heroku drains | grep hirefire

The token has the format d.00000000-0000-0000-0000-000000000000. The metrics become available one minute after the logdrain has been added.
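If you want to grab the token programmatically rather than copying it by hand, a sketch like the following could work. The sample line and token below are illustrative placeholders, and the exact output format of `heroku drains` may differ:

```shell
# Illustrative output line from `heroku drains | grep hirefire`
# (the token here is a placeholder, not a real drain token)
line='https://logdrain.hirefire.io (d.01234567-89ab-cdef-0123-456789abcdef)'

# Extract the drain token: "d." followed by a 36-character UUID
token=$(echo "$line" | grep -oE 'd\.[0-9a-f-]{36}')
echo "$token"
```

This simply pattern-matches the `d.<uuid>` token out of the line; you'd pipe the real `heroku drains` output through the same `grep`.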

HireFire UI

Once the logdrain has been created and you've taken note of the drain token, log in to HireFire and add your Heroku application if you haven't already. In the application form you'll see a field labeled "Logplex Drain Token". Add the token there and save it.

Now create a manager named web (it must be called web, all lowercase, to match the web entry in your Procfile), set its Type to Web.Logplex.ResponseTime, configure the remaining options to your liking, then save and enable the manager.
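For reference, the manager name corresponds to a process type defined in your Procfile. A typical Procfile might look like this (the commands are just examples; use whatever starts your own processes):

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```

The manager named web scales the dynos running the web process type above.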

Once that's done, HireFire will auto-scale your web dynos.

Details

This strategy streams your application and platform logs to HireFire's logdrain. The logdrain parses metric data out of the received logs to build a short-lived stream of metrics that HireFire's primary system uses to auto-scale your dyno formation.

Logs are consumed, parsed, and stored in the logdrain within roughly a second of Heroku emitting them, giving HireFire metrics that are accurate to the second when it requests them during the per-minute manager check-ups.

HireFire does not extract any information from your logs beyond what is necessary for auto-scaling: the metrics emitted by Heroku's Router (service, connect), your application (queue), and Heroku's Runtime Metrics (load1m, load5m, load15m).
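To illustrate what that extraction looks like, here is a hedged sketch of pulling the connect and service values out of a Heroku router log line. The log line below is made up for illustration, and this is not HireFire's actual parser:

```shell
# Illustrative Heroku router log line (all values made up)
line='heroku[router]: at=info method=GET path="/" dyno=web.1 connect=2ms service=45ms status=200 bytes=1024'

# Pull out the router metrics used for response-time scaling
connect=$(echo "$line" | sed -n 's/.*connect=\([0-9]*\)ms.*/\1/p')
service=$(echo "$line" | sed -n 's/.*service=\([0-9]*\)ms.*/\1/p')
echo "connect=${connect}ms service=${service}ms"
```

Aggregating the service values over a window is what yields the averages and percentiles the manager scales on.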
