The Logging Module

Python’s most underrated library

Author

Karsten Naert

Published

November 15, 2025

Why Logging Matters

Picture this: your application crashes in production at 3 AM. Your boss is calling. Users are complaining. You frantically search through the code trying to figure out what went wrong. If only you had some way to see what the application was doing before it crashed…

This is where logging comes in. Good logging is essential for:

  • Debugging: Understanding what went wrong and why
  • Performance investigation: Finding bottlenecks and slow operations
  • Production monitoring: Keeping an eye on your application’s health
  • Audit trails: Tracking who did what and when

Python’s standard logging module is incredibly flexible, which unfortunately means it’s also often misunderstood. Many Python developers prefer alternatives like loguru because they find logging too complex. But once you understand the core concepts, you’ll see that logging’s flexibility is actually its superpower.

Tip

Python’s logging system was inspired by Java’s log4j. log4j’s popularity declined after the Log4Shell vulnerability was discovered in 2021, highlighting the dangers of passing untrusted user input through a logging system.

Your First Logger

Let’s start simple. To use logging, you first need to get a logger:

import logging

mylogger = logging.getLogger('mylogger')
mylogger
<Logger mylogger (WARNING)>

You create a logger by calling logging.getLogger() with a name. If you use the same name twice, you get the same logger back—this is a variant on the Singleton pattern 1 in action:

logger1 = logging.getLogger('mylogger')
logger2 = logging.getLogger('mylogger')

logger1 is logger2  # Same object!
True

This is incredibly useful: different modules in your application can all access the same logger just by using the same name.
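
For example (the module names here are hypothetical), two files can share one logger without importing anything from each other:

# payments.py
import logging
logger = logging.getLogger('shop')

# shipping.py
import logging
logger = logging.getLogger('shop')  # the very same Logger object as above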

Logging Your First Message

A logger has five main methods corresponding to different severity levels:

mylogger.debug('Very detailed information for diagnosing problems')
mylogger.info('Confirmation that things are working as expected')
mylogger.warning('Something unexpected happened, but we can continue')
mylogger.error('A serious problem occurred')
mylogger.critical('The application may not be able to continue')
Something unexpected happened, but we can continue
A serious problem occurred
The application may not be able to continue

Wait, where’s the output? We only see the warning and above! This is the default behavior: Python’s logging module comes with a “last resort” handler that only shows warnings and more severe messages, and it sends them to stderr.

Each severity level has a numeric value—higher numbers mean more severe:

logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL
(10, 20, 30, 40, 50)

You can also use the generic log() method with a custom level:

mylogger.log(logging.ERROR, 'This is an error message')
mylogger.log(25, 'Custom level between INFO and WARNING')

But stick to the standard five methods—they’re clearer and more idiomatic.

Exercise

Create a logger with your own name and try logging messages at all five severity levels. Which ones appear by default?

The Logger Hierarchy

Loggers can have parent-child relationships using dot notation:

parent = logging.getLogger('myapp')
child = logging.getLogger('myapp.database')
grandchild = logging.getLogger('myapp.database.postgres')

child.parent is parent
True
grandchild.parent is child
True

Every logger has a parent. If you don’t specify one, the parent is the root logger:

mylogger.parent
<RootLogger root (WARNING)>

You can get the root logger by calling getLogger() without arguments:

root = logging.getLogger()
root
<RootLogger root (WARNING)>

This hierarchy will become important later when we discuss propagation—how log messages bubble up through the family tree.

Naming Convention

The Python documentation recommends naming loggers after the module they’re in:

logger = logging.getLogger(__name__)

This automatically creates a hierarchy that matches your package structure.
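
For instance, in a hypothetical file myapp/database/postgres.py:

# myapp/database/postgres.py
import logging

# __name__ is 'myapp.database.postgres' here, so this logger is
# automatically a descendant of logging.getLogger('myapp')
logger = logging.getLogger(__name__)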

Handlers: Where Logs Go

Here’s the key insight about logging’s architecture: loggers decide whether to process a message, while handlers decide where to send it.

Let’s see this in action. First, we’ll disable the default “last resort” handler to see what’s really happening:

import sys

# Save the original for later
original_last_resort = logging.lastResort

# Disable it (don't do this in production!)
logging.lastResort = None

testlogger = logging.getLogger('testlogger')
testlogger.critical('Help! Can anyone hear me?')

Nothing! Without handlers, the logger has nowhere to send messages. Let’s fix that by adding a StreamHandler:

handler = logging.StreamHandler(sys.stderr)
testlogger.addHandler(handler)

testlogger.critical('Now you can hear me!')

We can add multiple handlers to the same logger:

file_handler = logging.FileHandler('application.log')
testlogger.addHandler(file_handler)

testlogger.error('This goes to both stderr AND the file!')

Now the message is sent to both stderr (which we can see) and the file application.log (check it out—it’s really there!).

Different handlers exist for different purposes:

  • StreamHandler: Write to stdout or stderr
  • FileHandler: Write to a file
  • RotatingFileHandler: Write to a file, rotating when it gets too large
  • TimedRotatingFileHandler: Rotate files based on time
  • SMTPHandler: Send emails (for critical errors!)
  • SysLogHandler: Send to system logs
  • And many more in the documentation
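
Note that handlers beyond StreamHandler and FileHandler live in the logging.handlers submodule. As a minimal sketch of the rotating variant (the size and backup count below are arbitrary):

from logging.handlers import RotatingFileHandler

# rotate app.log once it reaches ~1 MB, keeping 3 old copies
# (app.log.1, app.log.2, app.log.3)
rotating_handler = RotatingFileHandler('app.log', maxBytes=1_000_000, backupCount=3)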

You can even create your own custom handlers by subclassing logging.Handler.
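
Here’s a toy sketch: emit() is the method a Handler subclass overrides, and this one simply collects formatted messages in a list (handy in tests):

class MemoryListHandler(logging.Handler):
    """Collect formatted log messages in a list."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        self.messages.append(self.format(record))

collector = MemoryListHandler()
memlogger = logging.getLogger('memdemo')
memlogger.addHandler(collector)
memlogger.warning('Kept in memory')
collector.messages
['Kept in memory']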

The Duplicate Handler Trap

Be careful not to call addHandler() multiple times with the same handler (or equivalent handlers)! Each time you add a handler, it gets another chance to process the message. This is a common source of duplicate log messages that confuses beginners.

logger.addHandler(handler)
logger.addHandler(handler)  # Oops! Now every message appears twice

Exercise

Create a logger that:

  1. Writes all messages to a file called all.log
  2. Also writes messages to stderr

Test it with messages at different severity levels. Check that all.log contains the messages.

Log Levels: The Filter Chain

Both loggers and handlers can filter messages by severity level. This creates a two-stage filtering process:

  1. Logger level: “Is this message important enough for me to even consider?”
  2. Handler level: “Should I specifically handle this message?”

A message must pass both filters to appear.

Logger Levels

Every logger has a level. Messages below this level are ignored:

mylogger = logging.getLogger('mylogger')
mylogger.level  # 0 means "not set"
0

When a logger’s level is 0 (not set), it inherits the level from its parent. Eventually, this climbs up to the root logger:

mylogger.parent.level  # The root logger defaults to WARNING (30)
30

You can see the effective level—the level the logger actually uses:

mylogger.getEffectiveLevel()
30

So by default, loggers only process warnings and above (severity 30+):

# Set up a handler so we can see output
mylogger.addHandler(logging.StreamHandler(sys.stderr))

mylogger.debug('Debug message')      # Won't show (10 < 30)
mylogger.info('Info message')        # Won't show (20 < 30)  
mylogger.warning('Warning message')  # Will show (30 >= 30)

Let’s change the logger’s level to see more messages:

mylogger.setLevel(logging.INFO)
mylogger.getEffectiveLevel()
20
mylogger.info('Now info shows up!')
mylogger.debug('But debug still does not')

Set it to DEBUG to see everything:

mylogger.setLevel(logging.DEBUG)
mylogger.debug('Finally, debug messages!')

Handler Levels

Handlers have their own levels too! This acts as an additional filter:

# module/example.py

import logging
import sys

logger = logging.getLogger('example')
logger.setLevel(logging.DEBUG)  # Logger accepts everything

# Handler 1: stderr shows INFO and above
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.INFO)
logger.addHandler(stderr_handler)

# Handler 2: file captures DEBUG and above
file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

logger.debug('Only in the file')
logger.info('Both places')
logger.error('Both places')

In this example:

  • Debug messages go only to the file
  • Info and above go to both stderr and the file

This is incredibly powerful for production: show only important messages to users while logging everything for later analysis.

Exercise

Create two loggers:

  1. logger1:
    • Print EVERYTHING to stderr
    • Write WARNING and higher to file1.txt
  2. logger2:
    • Print EVERYTHING to stderr
    • Write ERROR and higher to file2.txt

Test them with messages at all severity levels.

Propagation: The Family Meeting

When a child logger processes a message, it doesn’t just use its own handlers—it also passes the message to its parent. This is called propagation.

Let’s see this in action:

import sys
import logging

parent = logging.getLogger('parent')
child = logging.getLogger('parent.child')

# Only the parent has a handler
parent.addHandler(logging.StreamHandler(sys.stderr))

# But the child's messages still appear!
child.warning('Message from child')

The child logger has no handlers, but the message still appears because it propagates up to the parent, which does have a handler.

Understanding Propagation Better

Let’s create a custom handler to see exactly what’s happening:

class LabeledHandler(logging.StreamHandler):
    """A handler that shows which logger it belongs to"""
    def __init__(self, stream, label):
        super().__init__(stream)
        self.label = label
    
    def format(self, record):
        msg = super().format(record)
        return f'[{self.label}] {msg}'

Now let’s set up a parent-child relationship:

parent = logging.getLogger('parent')
parent.setLevel(logging.DEBUG)
parent.addHandler(LabeledHandler(sys.stderr, 'PARENT HANDLER'))

child = logging.getLogger('parent.child')
child.addHandler(LabeledHandler(sys.stderr, 'CHILD HANDLER'))

child.warning('Message from child')

Both handlers fire! First the child’s own handlers, then the parent’s handlers (via propagation).

You can disable propagation if needed:

child.propagate = False
child.warning('Message from child (no propagation)')

When to Disable Propagation

Disabling propagation is useful when you want complete control over where a specific logger’s messages go. For example, you might want to send all database logs to a separate file without mixing them with the main application logs.
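
A minimal sketch of that idea (the logger name and filename are just examples):

db_logger = logging.getLogger('myapp.database')
db_logger.setLevel(logging.DEBUG)
db_logger.addHandler(logging.FileHandler('database.log'))
db_logger.propagate = False  # keep these messages out of myapp's handlers

db_logger.info('This goes to database.log and nowhere else')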

Formatters: Making It Pretty

So far our log messages have been quite plain. Formatters let you customize how messages appear.

A formatter is attached to a handler and specifies the message format:

import logging
import sys

logger = logging.getLogger('formatted')
handler = logging.StreamHandler(sys.stderr)

formatter = logging.Formatter('%(levelname)s - %(message)s')
handler.setFormatter(formatter)

logger.addHandler(handler)
logger.warning('This message is formatted!')

You can include all sorts of useful information using special format codes:

formatter = logging.Formatter(
    '[%(levelname)s] %(asctime)s - %(name)s - %(message)s'
)
handler.setFormatter(formatter)

logger.info('Much more informative!')

Here are some commonly used format attributes:

Attribute Description
%(message)s The log message itself
%(levelname)s Severity level (DEBUG, INFO, etc.)
%(asctime)s Timestamp
%(name)s Logger name
%(filename)s Filename where logging call was made
%(lineno)d Line number of logging call
%(funcName)s Function name
%(pathname)s Full pathname of source file
%(process)d Process ID
%(thread)d Thread ID

See the full list in the documentation.
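
Two more Formatter features worth knowing: the datefmt argument controls how %(asctime)s is rendered, and style='{' switches the format string to str.format-style placeholders. A quick sketch, reusing the handler from above:

handler.setFormatter(logging.Formatter(
    '{asctime} [{levelname}] {name}: {message}',
    datefmt='%Y-%m-%d %H:%M:%S',
    style='{',
))
logger.warning('Same information, brace-style template')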

For production applications, you typically want detailed formatters for files and simple formatters for console output:

# module/production_setup.py

import logging

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)

# Console: simple and clean
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(console)

# File: detailed for debugging
file_handler = logging.FileHandler('myapp.log')
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'
))
logger.addHandler(file_handler)

Exercise

Create a logger with a handler that formats messages to show:

  • The timestamp
  • The severity level
  • The logger name
  • The message

Test it with various messages.

Design Patterns in Logging

The logging module is a beautiful example of several classic design patterns working together:

The Singleton / Multiton Pattern

When you call getLogger('name') multiple times with the same name, you always get the same logger object back. This ensures consistent logging across your entire application without passing logger objects around.

The Chain of Responsibility Pattern

Logger propagation is a textbook example of this pattern. A logger processes a message with its handlers, then passes it to the next logger in the chain (its parent), which does the same, and so on up to the root logger.

The Strategy Pattern

Handlers represent different strategies for outputting log messages. You can swap strategies (write to file vs. send email vs. write to database) without changing the logger itself. Each handler encapsulates one strategy.

The Template Method Pattern

Formatters define the “template” for how messages should look. The logging system fills in the template with actual values at runtime.

These patterns working together give logging its incredible flexibility. You can reconfigure the entire logging behavior of an application without touching a single logging call!

Library Authors vs. Library Users

Here’s where many Python projects go wrong. There’s a social contract in Python’s logging ecosystem:

The Golden Rule

Library authors must NEVER configure handlers.

Only the application (the library user) should configure handlers. Libraries should only create loggers and use them to log messages.

Why this rule? Because if your library adds a handler, you’re forcing your opinion about where logs should go onto everyone who uses your library. Maybe they want logs in a file. Maybe they want them in a database. Maybe they don’t want to see your library’s logs at all!

The Right Way for Library Authors

If you’re writing a library called superware, here’s what you should do:

# superware/database.py

import logging

# Use __name__ to automatically match package structure
logger = logging.getLogger(__name__)

def connect_to_database(config):
    logger.info('Connecting to database...')
    # ... do the actual connection here ...
    logger.debug('Connection established with config: %s', config)

# superware/api.py

import logging

logger = logging.getLogger(__name__)

def fetch_data():
    logger.debug('Fetching data from API')
    records = []  # ... fetch the actual data here ...
    logger.info('Received %d records', len(records))

Notice:

  • We create loggers with __name__, which will be 'superware.database' and 'superware.api'
  • We never call addHandler() or setLevel() or setFormatter()
  • We just log messages and trust the user to configure things
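
There is one exception the official documentation allows: a library may attach a NullHandler to its top-level logger. NullHandler does nothing, so if the application configures no logging at all, the library’s messages are silently discarded instead of falling through to the last-resort handler:

# superware/__init__.py

import logging

# The one handler a library is allowed to add: it discards records,
# keeping unconfigured applications free of stray log output
logging.getLogger(__name__).addHandler(logging.NullHandler())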

The Right Way for Library Users

When you use superware, you configure logging:

# myapp.py

import logging
import superware

# Configure the root logger or the specific library's logger
logger = logging.getLogger('superware')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('superware.log')
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
))
logger.addHandler(handler)

# Now use the library - its logs will appear as configured
superware.do_something()

Because superware.database and superware.api are children of superware, they’ll inherit this configuration via propagation. The user has complete control!

What About logging.basicConfig()?

You might have seen logging.basicConfig(). It’s a convenient function for simple cases:

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    filename='app.log'
)

This does basic configuration by adding a handler to the root logger. It’s fine for simple scripts, but:

  • It only works once (unless you use force=True)
  • It configures the root logger, affecting all loggers
  • It’s too coarse-grained for most applications

For anything beyond a simple script, explicit configuration is clearer and more flexible.

Quick Rule of Thumb
  • Simple scripts: logging.basicConfig() is fine
  • Libraries: Never configure handlers, just create and use loggers
  • Applications: Explicitly configure loggers and handlers for full control

Alternative Perspectives: The Loguru Movement

Not everyone loves the standard logging module. Some find it too complex, too verbose, and too easy to misconfigure. Enter loguru:

# Standard logging
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))
logger.addHandler(handler)
logger.info('Hello')

# Loguru
from loguru import logger
logger.info('Hello')

I don’t have any practical experience with Loguru myself, but it claims to provide:

  • Sensible defaults out of the box
  • Simpler API with less boilerplate
  • Colored output by default
  • Better exception formatting
  • Easier rotation and retention

Common Pitfalls

Let’s cover some mistakes that even experienced Python developers make:

Pitfall 1: Using the Root Logger

# Don't do this
logging.info('Something happened')

This uses the root logger. Better to create a named logger:

# Do this instead
logger = logging.getLogger(__name__)
logger.info('Something happened')

Pitfall 2: String Formatting Too Early

# Wasteful - formats even if message is filtered
logger.debug(f'Processing {large_object}')

# Better - only formats if message will be shown
logger.debug('Processing %s', large_object)

The second version only converts large_object to a string if the message actually gets logged. This matters in tight loops! 2
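
If even building the arguments is expensive, the documented escape hatch is isEnabledFor() (compute_expensive_stats here is a hypothetical helper):

if logger.isEnabledFor(logging.DEBUG):
    # only pay for the computation when DEBUG will actually be emitted
    logger.debug('Stats: %s', compute_expensive_stats())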

Pitfall 3: Adding Handlers Multiple Times

# This gets called multiple times somehow
def setup_logging():
    logger = logging.getLogger('myapp')
    logger.addHandler(logging.StreamHandler())  # Oops!

setup_logging()
setup_logging()  # Now every message appears twice!

Always check if handlers exist before adding:

def setup_logging():
    logger = logging.getLogger('myapp')
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())

Pitfall 4: Not Understanding Propagation

If you see duplicate messages, it’s probably propagation:

import logging
import sys

root = logging.getLogger()
root.addHandler(logging.StreamHandler(sys.stderr))

child = logging.getLogger('myapp')
child.addHandler(logging.StreamHandler(sys.stderr))

child.warning('This appears twice!')

The message appears twice because it goes through the child’s handler, then propagates to the root’s handler.
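
Two possible fixes, sketched here: remove the redundant handler, or cut propagation.

child.handlers.clear()     # option 1: rely on the root's handler alone
# child.propagate = False  # option 2: keep the child's handler instead

child.warning('This appears only once now')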

The Challenge: Building Superware

Time to put it all together! You’re going to create a Python package called superware with proper logging configuration.

The Package Structure

Create this structure:

superware/
  __init__.py
  golden/
    __init__.py
    gold1.py
  silver/
    __init__.py
    silver1.py
test_superware.py

The Requirements

Your logging must:

  1. All ERROR and higher → errors.log with full details (timestamp, logger name, level, filename, line number, message)
  2. All WARNING and higher → stderr with simple format (just level and message)
  3. All output from golden.gold1 (including DEBUG) → golden1.log
  4. All output from silver.silver1 (including DEBUG) → silver1.log

Starter Code

Use this as a starting point:

# superware/__init__.py

import logging

# You could add initialization here if needed

# superware/golden/__init__.py

# Empty or with package initialization

# superware/golden/gold1.py

import logging

logger = logging.getLogger(__name__)

def process_gold():
    logger.debug('Starting gold processing')
    logger.info('Processing gold item')
    logger.warning('Gold purity is below threshold')
    logger.error('Gold processing failed')

# superware/silver/__init__.py

# Empty or with package initialization

# superware/silver/silver1.py

import logging

logger = logging.getLogger(__name__)

def process_silver():
    logger.debug('Starting silver processing')
    logger.info('Processing silver item')
    logger.warning('Silver needs polishing')
    logger.error('Silver processing failed')

# test_superware.py

import logging
import sys
from superware.golden import gold1
from superware.silver import silver1

# YOUR CONFIGURATION CODE HERE
# Set up the handlers and formatters to meet the requirements

# Test the logging
print('=== Testing Gold Processing ===')
gold1.process_gold()

print('\n=== Testing Silver Processing ===')
silver1.process_silver()

print('\n=== Check the log files! ===')
print('errors.log should have ERROR messages with full details')
print('golden1.log should have all messages from gold1.py')
print('silver1.log should have all messages from silver1.py')

Hints

  • Remember the logger hierarchy: superware.golden.gold1 is a child of superware.golden
  • You’ll need at least 4 handlers (errors, stderr, golden1, silver1)
  • Use setLevel() on both loggers and handlers
  • The formatters for errors.log and the individual module logs should be detailed
  • The stderr formatter should be simple

Solution Approach

Don’t look at this until you’ve tried! But if you get stuck, here’s the general approach:

  1. Create a handler for errors.log with level ERROR and detailed formatter
  2. Create a handler for stderr with level WARNING and simple formatter
  3. Add both to the root logger (or superware logger)
  4. Create a handler for golden1.log with level DEBUG
  5. Get the superware.golden.gold1 logger and add the handler
  6. Set that logger’s level to DEBUG and disable propagation
  7. Repeat steps 4-6 for silver1.log

Exercise

Implement the test_superware.py configuration section. Run it with:

python test_superware.py

Check that all three log files are created with the right content, and that stderr shows the right messages.

Going Further

The logging module has even more features we haven’t covered:

  • Filters: More sophisticated filtering than just level (filter by message content, add context, etc.)
  • LogRecord: The object that represents a single log event
  • Dictionary-based configuration: Configure logging from a dict or JSON file (see the sketch after this list)
  • Thread-safe logging: Logging is thread-safe by default
  • Structured logging 3
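
As a taste of dictionary-based configuration, here is a minimal dictConfig() sketch (the formatter, handler, and logger names are arbitrary):

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'simple',
        },
    },
    'loggers': {
        'myapp': {'level': 'DEBUG', 'handlers': ['console']},
    },
})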

For more advanced topics, see the Logging HOWTO and the Logging Cookbook in the official Python documentation.

Summary

The logging module might seem complex at first, but it’s built on three simple concepts:

  1. Loggers decide whether a message is important enough to process
  2. Handlers decide where the message should go
  3. Formatters decide what the message should look like

Add in the logger hierarchy and propagation, and you have an incredibly flexible system that can handle everything from simple scripts to complex distributed systems.

Remember the key lessons:

  • Use logging.getLogger(__name__) to create loggers that match your package structure
  • Library authors: create loggers but never configure handlers
  • Application developers: configure loggers and handlers explicitly
  • Don’t add the same handler twice
  • Use lazy string formatting: logger.debug('Processing %s', obj)
  • Understand propagation to avoid duplicate messages

Footnotes

  1. Sometimes called the Multiton pattern.

  2. For more on performance, see this logging cookbook entry.

  3. For JSON output and structured logging, check out python-json-logger or consider loguru’s serialize option.