With the complexity of modern software applications, one of the biggest challenges for developers is simply understanding how their applications behave. That understanding is key to maintaining stability, performance, and security.
This is a big reason why we do application logging: to capture and record events through an application’s lifecycle so that we can gain valuable insights into our application. What kinds of insights? Application activity (user interactions, system events, and so on), errors and exceptions, resource usage, potential security threats, and more.
When developers can capture and analyze these logs effectively, this improves application stability and security, which, in turn, improves the user experience. It’s a win-win for everybody.
Application logging is easy—if you have the right tools. In this post, we’ll walk through using Heroku Logplex as a centralized logging solution. We’ll start by deploying a simple Python application to Heroku. Then, we’ll explore the different ways to use Logplex to view and filter our logs. Finally, we’ll show how to use Logplex to send your logs to an external service for further analysis.
Ready to dive in? Let’s start with a brief introduction to Heroku Logplex.
Heroku Logplex is a central hub that collects, aggregates, and routes log messages from various sources across your Heroku applications. Those sources include your dyno logs, logs from Heroku's system components, and custom log sources.
By consolidating logs in a single, central place, Logplex simplifies log management and analysis. You can find all your logs in one place for simplified monitoring and troubleshooting. You can perform powerful filtering and searching on your logs. And you can even route logs to different destinations for further processing and analysis.
At its heart, Heroku Logplex consists of three crucial components that work together to streamline application logging:
1. Log sources are the starting points where log messages originate within your Heroku environment. They are your dyno logs, Heroku logs, and custom sources, which we mentioned above.
2. Log drains are the designated destinations for your log messages. Logplex allows you to configure drains to route your logs to various endpoints for further processing. Popular options for log drains include:
External logging services with advanced log management features, dashboards, and alerting capabilities. Examples are Datadog, Papertrail, and Sumo Logic.
Notification systems that send alerts or notifications based on specific log entries, enabling real-time monitoring and troubleshooting.
Custom destinations such as your own Syslog or web server.
3. Log filters are powerful tools that act as checkpoints, allowing you to refine the log messages before they reach their final destinations. Logplex allows you to filter logs based on source, log level, and even message content.
By using filters, you can significantly reduce the volume of data sent to your drains, focusing only on the most relevant log entries for that specific destination.
As Logplex collects log messages from all your defined sources, it passes these messages through your configured filters, potentially discarding entries that don't match the criteria. Finally, filtered messages are routed to their designated log drains for further processing or storage.
Alright, enough talk. Show me how, already!
Let’s walk through how to use Logplex for a simple Python application. To get started, make sure you have a Heroku account. Then, install the Heroku CLI.
You can find our very simple Python script (main.py) in the GitHub repository for this demo. Our script runs an endless integer counter, starting from zero. With each iteration, it emits a log message (cycling through the log levels INFO, DEBUG, ERROR, and WARN). Whenever it detects a prime number, it emits an additional CRITICAL log event to let us know. We use sympy to help us determine if a number is prime.
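To make the behavior concrete, here is a simplified, dependency-free sketch of the script's logic. The real script uses python-json-logger and sympy; the level cycle and field names below are inferred from its sample output, and the inline prime check is a stand-in for sympy.isprime.

```python
def isprime(n):
    """Stand-in for sympy.isprime, so this sketch has no dependencies."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The cycle of log levels the counter rotates through
LEVELS = ["INFO", "DEBUG", "ERROR", "WARNING"]

def events_for(n):
    """Return the log events the counter emits for the value n."""
    events = [{"level": LEVELS[n % 4], "message": "New number", "Number": n}]
    if isprime(n):
        events.append({"level": "CRITICAL", "message": "Prime found!", "Prime Number": n})
    return events
```

Compare this with the sample output: number 2 lands on ERROR in the cycle and, being prime, also triggers a CRITICAL "Prime found!" event.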
To run this Python application on your local machine, first clone the repository. Then, install the dependencies:
(venv) ~/project$ pip install -r requirements.txt
Next, start up the Python application. We use gunicorn to spin up a server that binds to a port while our prime number logging continues to run in the background. (We do this because a Heroku deployment is designed to bind to a port, so that’s how we’ve written our application even though we’re focused on logging).
(venv) ~/project$ gunicorn -w 1 --bind localhost:8000 main:app
[2024-03-25 23:18:59 -0700] [785441] [INFO] Starting gunicorn 21.2.0
[2024-03-25 23:18:59 -0700] [785441] [INFO] Listening at: http://127.0.0.1:8000 (785441)
[2024-03-25 23:18:59 -0700] [785441] [INFO] Using worker: sync
[2024-03-25 23:18:59 -0700] [785443] [INFO] Booting worker with pid: 785443
{"timestamp": "2024-03-25T23:18:59.507828Z", "level": "INFO", "name": "root", "message": "New number", "Number": 0}
{"timestamp": "2024-03-25T23:19:00.509182Z", "level": "DEBUG", "name": "root", "message": "New number", "Number": 1}
{"timestamp": "2024-03-25T23:19:01.510634Z", "level": "ERROR", "name": "root", "message": "New number", "Number": 2}
{"timestamp": "2024-03-25T23:19:02.512100Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "Prime Number": 2}
{"timestamp": "2024-03-25T23:19:05.515133Z", "level": "WARNING", "name": "root", "message": "New number", "Number": 3}
{"timestamp": "2024-03-25T23:19:06.516567Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "Prime Number": 3}
{"timestamp": "2024-03-25T23:19:09.519082Z", "level": "INFO", "name": "root", "message": "New number", "Number": 4}
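The JSON lines above come from python-json-logger. As an illustration of how that shape is produced, here is a rough stdlib-only stand-in for the formatter (the field names mirror the sample output; the real project simply uses python-json-logger):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Rough stdlib-only stand-in for python-json-logger's formatter."""
    EXTRA_KEYS = ("Number", "Prime Number")  # extras the demo script attaches

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        for key in self.EXTRA_KEYS:
            if key in record.__dict__:  # extra={...} fields land on the record
                payload[key] = record.__dict__[key]
        return json.dumps(payload)

# Wire the formatter up to a stream handler
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.info("New number", extra={"Number": 0})
```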
Simple enough. Now, let’s get ready to deploy it and work with logs.
We start by logging into Heroku through the CLI.
$ heroku login
Then, we create a new Heroku app. I’ve named my app logging-primes-in-python, but you can name yours whatever you’d like.
$ heroku apps:create logging-primes-in-python
Creating ⬢ logging-primes-in-python... done
https://logging-primes-in-python-6140bfd3c044.herokuapp.com/ | https://git.heroku.com/logging-primes-in-python.git
Next, we set the Heroku git remote for our GitHub repo with this Python application.
$ heroku git:remote -a logging-primes-in-python
set git remote heroku to https://git.heroku.com/logging-primes-in-python.git
requirements.txt and Procfile
We need to let Heroku know what dependencies our Python application needs and also how it should start up our application. To do this, our repository has two files: requirements.txt and Procfile.
The first file, requirements.txt, looks like this:
python-json-logger==2.0.4
pytest==8.0.2
sympy==1.12
gunicorn==21.2.0
And Procfile looks like this:
web: gunicorn -w 1 --bind 0.0.0.0:${PORT} main:app
That’s it. Our entire repository has these files:
$ tree
.
├── main.py
├── Procfile
└── requirements.txt
0 directories, 3 files
Now, we’re ready to deploy our code. We run this command:
$ git push heroku main
…
remote: Building source:
remote:
remote: -----> Building on the Heroku-22 stack
remote: -----> Determining which buildpack to use for this app
remote: -----> Python app detected
…
remote: -----> Installing requirements with pip
…
remote: -----> Launching...
remote: Released v3
remote: https://logging-primes-in-python-6140bfd3c044.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploy... done.
To verify that everything works as expected, we can dive into Logplex right away. Logplex is enabled by default for all Heroku applications.
$ heroku logs --tail -a logging-primes-in-python
…
2024-03-22T04:34:15.540260+00:00 heroku[web.1]: Starting process with command `gunicorn -w 1 --bind 0.0.0.0:${PORT} main:app`
…
2024-03-22T04:34:16.425619+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:16.425552Z", "level": "INFO", "name": "root", "message": "New number", "taskName": null, "Number": 0}
2024-03-22T04:34:17.425987+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:17.425837Z", "level": "DEBUG", "name": "root", "message": "New number", "taskName": null, "Number": 1}
2024-03-22T04:34:18.000000+00:00 app[api]: Build succeeded
2024-03-22T04:34:18.426354+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:18.426205Z", "level": "ERROR", "name": "root", "message": "New number", "taskName": null, "Number": 2}
2024-03-22T04:34:19.426700+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:19.426534Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "taskName": null, "Prime Number": 2}
We can see that logs are already being written. Heroku’s log format follows this scheme:
timestamp source[dyno]: message
Logs emitted by our application have the source app. Meanwhile, all of Heroku’s system components (HTTP router, dyno manager) have the source heroku. The dyno part tells us which process wrote the message: our application’s web dyno appears as web.1, and the Heroku HTTP router appears as router.
We’ve seen the first option for examining our logs: the Heroku CLI. You can use command line arguments, such as --source and --dyno, to apply filters and specify which logs to view.
To specify the number of (most recent) log entries to view, do this:
$ heroku logs --num 10
To filter down logs to a specific dyno or source, do this:
$ heroku logs --dyno web.1
$ heroku logs --source app
Of course, you can combine these filters, too:
$ heroku logs --source app --dyno web.1
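If you export logs and want to filter them yourself, the timestamp source[dyno]: message scheme is straightforward to parse. Here is a small sketch that mirrors --source and --dyno client-side (the regex is our own, not part of any Heroku tooling):

```python
import re

# Matches Logplex's "timestamp source[dyno]: message" line format
LINE_RE = re.compile(
    r"^(?P<timestamp>\S+) (?P<source>\w+)\[(?P<dyno>[^\]]+)\]: (?P<message>.*)$"
)

def parse_line(line):
    """Parse one Heroku log line into its fields, or None if it doesn't match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

def filter_logs(lines, source=None, dyno=None):
    """Yield parsed lines matching the given source/dyno, like --source/--dyno."""
    for line in lines:
        rec = parse_line(line)
        if rec is None:
            continue
        if source and rec["source"] != source:
            continue
        if dyno and rec["dyno"] != dyno:
            continue
        yield rec
```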
The Heroku Dashboard is another place where you can look at your logs. On your app page, click More -> View logs.
Here is what we see:
If you look closely, you’ll see different sources: heroku and app.
Let’s demonstrate how to use a log drain. For this, we’ll use BetterStack (formerly Logtail). We create a free account. After logging in, we navigate to the Source page and click Connect source.
We enter a name for our source and select Heroku as the source platform. Then, we click Create source.
After creating our source, BetterStack provides the Heroku CLI command we would use to add a log drain for sending logs to BetterStack.
Technically, this command adds an HTTPS log drain that points to an HTTPS endpoint from BetterStack. We run the command in our terminal, and then we restart our application:
$ heroku drains:add \
"https://in.logs.betterstack.com:6515/events?source_token=YKGWLN7****************" \
-a logging-primes-in-python
Successfully added drain https://in.logs.betterstack.com:6515/events?source_token=YKGWLN7*****************
$ heroku restart -a logging-primes-in-python
Almost instantly, we begin to see our Heroku logs appear on the Live tail page at BetterStack.
By using a log drain to send our logs from Heroku Logplex to an external service, we can take advantage of the features from BetterStack to work with our Heroku logs. For example, we can create visualization charts and configure alerts on certain log events.
In our example above, we created a custom HTTPS log drain that happened to point to an endpoint from BetterStack. However, we can send our logs to any endpoint we want. We could even send our logs to another Heroku app! Imagine building a web service on Heroku that only Heroku Logplex can make POST requests to.
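As a sketch of what such a receiving app would need to handle: Heroku’s HTTPS drains POST batches of syslog messages with the Content-Type application/logplex-1, using octet-counted framing, where each frame is a byte count, a space, then that many bytes of message. A minimal parser for that body might look like this (a simplified sketch that assumes ASCII payloads, so octet count equals character count):

```python
def parse_logplex_body(body):
    """Split an application/logplex-1 POST body into individual syslog messages.

    Each frame is "<length> <message>", where <length> counts the bytes of
    <message> (octet-counted framing, as in RFC 6587). Frames are concatenated
    back to back with no separator.
    """
    messages = []
    while body:
        length_str, _, rest = body.partition(" ")
        length = int(length_str)
        messages.append(rest[:length])
        body = rest[length:]  # the next frame starts immediately
    return messages
```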
That concludes our walkthrough. Let’s wrap up with the bigger picture.
Heroku Logplex empowers developers and operations teams with a centralized and efficient solution for application logging within the Heroku environment. While our goal in this article was to provide a basic foundation for understanding Heroku Logplex, remember that the platform offers a vast array of advanced features to explore and customize your logging based on your specific needs.
As you dig deeper into Heroku’s documentation, you’ll come across advanced functionalities like:
Customizable log processing: Leverage plugins and filters to tailor log processing workflows for specific use cases.
Real-time alerting: Configure alerts based on log patterns or events to proactively address potential issues.
Advanced log analysis tools: Integrate with external log management services for comprehensive log analysis, visualization, and anomaly detection.
By understanding the core functionalities and exploring the potential of advanced features, you can leverage Heroku Logplex to create a robust and efficient logging strategy. Ultimately, good logging will go a long way in enhancing the reliability, performance, and security of your Heroku applications.