5 Easy and Effective Ways to Use Python Logging | by Dmitrii Eliuseev | Jun, 2023

Use Python Logging like a Pro

Image generated by author

I can bet that almost every Python developer sometimes uses “print” for debugging. There’s nothing wrong with that for prototyping, but for production, there are much more effective ways to handle logs. In this article, I will show five practical reasons why Python’s “logging” module is much more flexible and powerful, and why you should absolutely start using it if you haven’t already.

Let’s get into it.


To make things more practical, let’s consider a toy example. I created a small application that calculates a linear regression for two Python lists:

import numpy as np
from sklearn.linear_model import LinearRegression
from typing import List, Optional

def do_regression(arr_x: List, arr_y: List) -> Optional[List]:
    """LinearRegression for X and Y lists"""
    try:
        x_in = np.array(arr_x).reshape(-1, 1)
        y_in = np.array(arr_y).reshape(-1, 1)
        print(f"X: {x_in}")
        print(f"y: {y_in}")

        reg = LinearRegression().fit(x_in, y_in)
        out = reg.predict(x_in)
        print(f"Out: {out}")
        print(f"Score: {reg.score(x_in, arr_y)}")
        print(f"Coef: {reg.coef_}")
        return out.reshape(-1).tolist()
    except ValueError as err:
        print(f"ValueError: {err}")
        return None

if __name__ == "__main__":
    print("App started")
    ret = do_regression([1, 2, 3, 4], [5, 6, 7, 8])
    print(f"LinearRegression result: {ret}")

This code works, but can we do it better? We obviously can. Let’s see five advantages of using “logging” instead of “print” in this code.

1. Logging levels

Let’s change our code a bit:

import logging
import numpy as np
from sklearn.linear_model import LinearRegression
from typing import List, Optional

def do_regression(arr_x: List, arr_y: List) -> Optional[List]:
    """LinearRegression for X and Y Python lists"""
    try:
        x_in = np.array(arr_x).reshape(-1, 1)
        y_in = np.array(arr_y).reshape(-1, 1)
        logging.debug(f"X: {x_in}")
        logging.debug(f"y: {y_in}")

        reg = LinearRegression().fit(x_in, y_in)
        out = reg.predict(x_in)
        logging.debug(f"Out: {out}")
        logging.debug(f"Score: {reg.score(x_in, arr_y)}")
        logging.debug(f"Coef: {reg.coef_}")
        return out.reshape(-1).tolist()
    except ValueError as err:
        logging.error(f"ValueError: {err}")
        return None

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format='%(message)s')

    logging.info("App started")
    ret = do_regression([1, 2, 3, 4], [5, 6, 7, 8])
    logging.info(f"LinearRegression result: {ret}")

Here I replaced “print” calls with “logging” calls. We made a small change, but it makes the output much more flexible. Using the “level” parameter, we can now set different logging levels. For example, if we use “level=logging.DEBUG”, then all output will be visible. When we are sure that our code is ready for production, we can change the level to “logging.INFO”, and debugging messages will not be displayed anymore:

The “INFO” logging level is on the left and “DEBUG” on the right, Image by author

And what is important is that no code change is needed except the initialization of the logging itself!

By the way, all available constants can be found in the logging/__init__.py file:

CRITICAL = 50
ERROR = 40
WARNING = 30
INFO = 20
DEBUG = 10
NOTSET = 0

As we can see, each level is just a number, and only messages at or above the configured level are shown. By enabling the “ERROR” log level, we can suppress everything below it, and only errors (and critical messages) will be displayed.
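A minimal sketch of how that threshold works: a message is emitted only when its numeric level is greater than or equal to the configured one, and the same check is available programmatically via isEnabledFor.

```python
import logging

# With the threshold at ERROR (40), lower-valued messages
# such as DEBUG (10) and INFO (20) are filtered out
logging.basicConfig(level=logging.ERROR, format='%(message)s')

logging.debug("invisible")  # 10 < 40, suppressed
logging.info("invisible")   # 20 < 40, suppressed
logging.error("visible")    # 40 >= 40, printed

# The same check, done programmatically:
print(logging.getLogger().isEnabledFor(logging.DEBUG))  # False
print(logging.getLogger().isEnabledFor(logging.ERROR))  # True
```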

2. Formatting

As we can see from the last screenshot, it is easy to control the logging output. But we can do much more to improve that. We can also adjust the output by providing the “format” string. For example, I can specify formatting like this:

format='[%(asctime)s] %(filename)s:%(lineno)d: %(message)s'

Without any other code changes, I will be able to see timestamps, file names, and even the line numbers in the output:

Logging output, Image by author

There are about 20 different parameters available, which can be found in the “LogRecord attributes” section of the documentation.
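A quick way to preview such a format string without reconfiguring the root logger is to apply a Formatter to a hand-built record (a sketch; the record’s contents below are made up for illustration):

```python
import logging

# A Formatter combining a few of the ~20 LogRecord attributes
fmt = logging.Formatter('[%(levelname)s] %(filename)s:%(lineno)d: %(message)s')

# Craft a record by hand just to see the resulting output
record = logging.LogRecord(name="root", level=logging.INFO,
                           pathname="app.py", lineno=42,
                           msg="App started", args=None, exc_info=None)
print(fmt.format(record))  # [INFO] app.py:42: App started
```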

3. Saving logs to a file

Python logging is a very flexible module, and its functionality can be easily expanded. Let’s say we want to save all our logs into a file for future analysis. To do this, we need to add only two lines of code:

logging.basicConfig(level=logging.DEBUG,
                    format='[%(asctime)s] %(message)s',
                    handlers=[logging.FileHandler("debug.log"),
                              logging.StreamHandler()])

As we can see, I added a new parameter, “handlers”. A StreamHandler displays the log in the console, and a FileHandler, as we can guess from its name, saves the same output to a file.

This system is really flexible. Plenty of different “handler” objects are available in Python, and I encourage readers to check the manual on their own. And as we already know, logging works almost automatically; no further code changes are required.
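As one illustration of that flexibility (a sketch, not from the original code): the stock QueueHandler and QueueListener pair moves log I/O off the calling thread, which is useful when handlers are slow.

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

# Records go into a queue; a background listener thread
# forwards them to the "real" handler (here, the console)
log_queue: queue.Queue = queue.Queue()
listener = QueueListener(log_queue, logging.StreamHandler())
listener.start()

logging.basicConfig(level=logging.DEBUG, format='[%(asctime)s] %(message)s',
                    handlers=[QueueHandler(log_queue)])
logging.debug("Delivered through the queue")

listener.stop()  # Flushes anything still waiting in the queue
```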

4. Rotating log files

Saving logs into a file is a good option, but alas, disk space is not unlimited. We can easily solve this problem by using a rotating log file:

from logging.handlers import TimedRotatingFileHandler


if __name__ == "__main__":
    file_handler = TimedRotatingFileHandler("debug.log",
                                            when="midnight",
                                            backupCount=3)
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s] %(message)s',
                        handlers=[file_handler, logging.StreamHandler()])

All parameters are self-explanatory. A TimedRotatingFileHandler object will create a log file that is rotated every midnight, and only the last three log files will be kept. Previous files are automatically renamed to something like “debug.log.2023-03-03”, and once more than three backups accumulate, the oldest one is deleted.
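If rotation by size fits better than rotation by time, the sibling RotatingFileHandler works the same way (the maxBytes value below is just an example):

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate when the file reaches ~1 MB; keep at most three backups
# ("debug.log.1", "debug.log.2", "debug.log.3")
file_handler = RotatingFileHandler("debug.log",
                                   maxBytes=1_000_000,
                                   backupCount=3)
logging.basicConfig(level=logging.DEBUG,
                    format='[%(asctime)s] %(message)s',
                    handlers=[file_handler, logging.StreamHandler()])
logging.debug("Written to console and to the size-capped file")
```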

5. Sending logs via socket

Python’s logging is surprisingly flexible. If we don’t want to save logs into a local file, we can just add a socket handler, which will send logs to another service using a specific IP and port:

from logging.handlers import SocketHandler

logging.basicConfig(level=logging.DEBUG,
                    format='[%(asctime)s] %(message)s',
                    handlers=[SocketHandler(host="", port=15001),
                              logging.StreamHandler()])

That’s it; no more code changes are required!

We can also create another application that will listen to the same port:

import socket
import logging
import pickle
import struct
from logging import LogRecord

port = 15001
stream_handler = logging.StreamHandler()

def create_socket() -> socket.socket:
    """Create the socket"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    return sock

def read_socket_data(conn_in: socket.socket):
    """Read data from socket"""
    while True:
        data = conn_in.recv(4)  # Data: 4 bytes length + body
        if len(data) > 0:
            body_len = struct.unpack(">L", data)[0]
            data = conn_in.recv(body_len)
            record: LogRecord = logging.makeLogRecord(pickle.loads(data))
            stream_handler.emit(record)
        else:
            logging.debug("Socket connection lost")
            return

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s] %(message)s',
                        handlers=[stream_handler])

    sock = create_socket()
    sock.bind(("", port))  # Local connections only
    sock.listen(1)  # One client can be connected
    logging.debug("Logs listening thread started")
    while True:
        try:
            conn, _ = sock.accept()
            logging.debug("Socket connection established")
            read_socket_data(conn)
        except socket.timeout:
            logging.debug("Socket listening: no data")

The tricky part here is the emit method, which passes every record received over the socket to the active StreamHandler.
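Under the hood, SocketHandler pickles each record’s attribute dict and prefixes it with a 4-byte big-endian length, which is exactly what read_socket_data unpacks on the other side. A standalone sketch of that wire format (the attribute values below are made up):

```python
import logging
import pickle
import struct

# Sender side (what SocketHandler does internally): pickle the
# record's attribute dict and prepend a 4-byte big-endian length
attrs = {"name": "root", "msg": "App started", "levelno": logging.INFO}
body = pickle.dumps(attrs)
frame = struct.pack(">L", len(body)) + body

# Receiver side (what our listener does): read the length, then
# the body, and rebuild a LogRecord from the unpickled dict
body_len = struct.unpack(">L", frame[:4])[0]
record = logging.makeLogRecord(pickle.loads(frame[4:4 + body_len]))
print(record.getMessage())  # App started
```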

6. Bonus: Log filters

Finally, a small bonus for readers who were attentive enough to read this far. It is also easy to add custom filters to logs. Let’s say we want to log only X and Y values into the file for future analysis. It is easy to create a new Filter class that keeps only messages containing “x:” or “y:”:

from logging import LogRecord, Filter

class DataFilter(Filter):
    """Filter for logging messages"""

    def filter(self, record: LogRecord) -> bool:
        """Save only filtered data"""
        return "x:" in record.msg.lower() or "y:" in record.msg.lower()

Then we can easily add this filter to the file log. Our console output will stay intact, but the file will have only “x:” and “y:” values.

file_handler = logging.FileHandler("debug.log")
file_handler.addFilter(DataFilter())

logging.basicConfig(level=logging.DEBUG,
                    format='[%(asctime)s] %(message)s',
                    handlers=[file_handler, logging.StreamHandler()])
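The filter can also be sanity-checked on its own, without touching any files, by calling it on hand-built records (a small sketch; the make_record helper is not part of the article’s code):

```python
import logging

class DataFilter(logging.Filter):
    """Keep only messages that mention x: or y:"""
    def filter(self, record: logging.LogRecord) -> bool:
        return "x:" in record.msg.lower() or "y:" in record.msg.lower()

def make_record(msg: str) -> logging.LogRecord:
    # Hypothetical helper: build a record by hand for testing
    return logging.LogRecord("demo", logging.DEBUG, "app.py", 1,
                             msg, None, None)

data_filter = DataFilter()
print(data_filter.filter(make_record("X: [[1] [2] [3]]")))  # True: kept
print(data_filter.filter(make_record("Score: 1.0")))        # False: dropped
```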


In this short article, we learned several easy ways to incorporate logging into a Python application. Logging in Python is a very flexible framework, and it is definitely worth spending some time investigating how it works.

Thanks for reading, and good luck with future experiments.

If you enjoyed this story, feel free to subscribe to Medium, and you will get notifications when my new articles are published, as well as full access to thousands of stories from other authors.
