
Logging in Python

Master application logging in Python - from basic setup to production-ready configurations with handlers, formatters, and best practices

April 21, 2026


Logging is essential for understanding what's happening in your application, debugging issues, and monitoring production systems. Python's built-in logging module provides a flexible and powerful way to track events in your code.

What is Logging?

Logging is the practice of recording information about your program's execution. Unlike print statements, logs are:

  • Persistent (saved to files)
  • Structured (with levels and formatting)
  • Configurable (can be turned on/off)
  • Production-ready (won't clutter user output)

Why Use Logging?

  • Debug issues in development
  • Monitor application behavior in production
  • Track errors and exceptions
  • Audit user actions
  • Analyze performance
  • Investigate security incidents

Logging vs Print:
    # ❌ Bad: Using print
    print("User logged in")
    print("Error: Database connection failed")
    
    # ✅ Good: Using logging
    logger.info("User logged in")
    logger.error("Database connection failed")
    

    Logging Levels

    Python has five standard logging levels:

    Level     | Numeric Value | Usage
    ----------|---------------|-------------------------------------
    DEBUG     | 10            | Detailed diagnostic information
    INFO      | 20            | General informational messages
    WARNING   | 30            | Warning messages (potential issues)
    ERROR     | 40            | Error messages (something failed)
    CRITICAL  | 50            | Critical errors (system failure)
    import logging
    
    logger = logging.getLogger(__name__)
    
    logger.debug("Detailed information for debugging")
    logger.info("General information about program execution")
    logger.warning("Something unexpected but not critical")
    logger.error("An error occurred, operation failed")
    logger.critical("Critical failure, system may be unstable")
    

    Basic Logging

    Quick Start

    import logging
    
    # Basic configuration
    logging.basicConfig(level=logging.INFO)
    
    # Create logger
    logger = logging.getLogger(__name__)
    
    # Log messages
    logger.debug("This won't show (level too low)")
    logger.info("Application started")
    logger.warning("Low disk space")
    logger.error("Failed to connect to database")
    logger.critical("System crash imminent")
    
    Output:
    INFO:__main__:Application started
    WARNING:__main__:Low disk space
    ERROR:__main__:Failed to connect to database
    CRITICAL:__main__:System crash imminent
    

    Configure Format

    import logging
    
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    
    logger = logging.getLogger(__name__)
    logger.info("User login successful")
    
    Output:
    2024-01-15 10:30:45,123 - __main__ - INFO - User login successful
    

    Log to File

    import logging
    
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        filename='app.log',
        filemode='a'  # 'a' for append, 'w' for overwrite
    )
    
    logger = logging.getLogger(__name__)
    logger.info("This goes to app.log")
    

    Advanced Configuration

    Logger Hierarchy

    import logging
    
    # Root logger
    root_logger = logging.getLogger()
    
    # Module logger
    logger = logging.getLogger(__name__)
    
    # Sub-module logger
    sub_logger = logging.getLogger(__name__ + '.submodule')
    
    # Logger hierarchy: root -> myapp -> myapp.database -> myapp.database.connection
    app_logger = logging.getLogger('myapp')
    db_logger = logging.getLogger('myapp.database')
    conn_logger = logging.getLogger('myapp.database.connection')
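
    Because records propagate up this hierarchy, a handler attached to a parent logger also receives records from its children. A minimal sketch (the 'shop' and 'shop.orders' logger names are illustrative):

```python
import logging
import sys

# Attach a handler only to the parent logger
parent = logging.getLogger('shop')
parent.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
parent.addHandler(handler)

# The child has no handler of its own, but its records propagate
# up to 'shop' and are emitted by the parent's handler
child = logging.getLogger('shop.orders')
child.info("Order placed")  # prints: shop.orders - INFO - Order placed

# Setting propagate = False stops records from reaching ancestor handlers
child.propagate = False
child.info("This no longer reaches the parent's handler")
```

    This is why attaching handlers once, near the top of the hierarchy, is usually enough.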
    

    Custom Logger Setup

    import logging
    
    def setup_logger(name, log_file, level=logging.INFO):
        """Function to setup a logger with file and console handlers"""
        
        # Create logger
        logger = logging.getLogger(name)
        logger.setLevel(level)
        
        # Create file handler
        file_handler = logging.FileHandler(log_file)
        file_handler.setLevel(level)
        
        # Create console handler
        console_handler = logging.StreamHandler()
        console_handler.setLevel(level)
        
        # Create formatter
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        )
        
        # Add formatter to handlers
        file_handler.setFormatter(formatter)
        console_handler.setFormatter(formatter)
        
        # Add handlers to logger
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        
        return logger
    
    # Usage
    logger = setup_logger('myapp', 'myapp.log')
    logger.info("Application started")
    

    Handlers

    Handlers send log records to different destinations.

    File Handler

    import logging
    
    logger = logging.getLogger(__name__)
    
    # Simple file handler
    file_handler = logging.FileHandler('app.log')
    file_handler.setLevel(logging.ERROR)
    logger.addHandler(file_handler)
    

    Rotating File Handler

    from logging.handlers import RotatingFileHandler
    
    # Rotate when file reaches 10MB, keep 5 backup files
    handler = RotatingFileHandler(
        'app.log',
        maxBytes=10*1024*1024,  # 10 MB
        backupCount=5
    )
    
    logger.addHandler(handler)
    

    Timed Rotating File Handler

    from logging.handlers import TimedRotatingFileHandler
    
    # Rotate daily at midnight, keep 7 days of logs
    handler = TimedRotatingFileHandler(
        'app.log',
        when='midnight',
        interval=1,
        backupCount=7
    )
    
    logger.addHandler(handler)
    

    Multiple Handlers

    import logging
    from logging.handlers import RotatingFileHandler
    
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    
    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter('%(levelname)s - %(message)s')
    console_handler.setFormatter(console_format)
    
    # File handler - DEBUG and above
    file_handler = RotatingFileHandler('debug.log', maxBytes=5*1024*1024, backupCount=3)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    file_handler.setFormatter(file_format)
    
    # Error file handler - ERROR and above
    error_handler = logging.FileHandler('error.log')
    error_handler.setLevel(logging.ERROR)
    error_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s\n%(pathname)s:%(lineno)d')
    error_handler.setFormatter(error_format)
    
    # Add handlers
    logger.addHandler(console_handler)
    logger.addHandler(file_handler)
    logger.addHandler(error_handler)
    
    # Log at different levels
    logger.debug("This goes to file only")
    logger.info("This goes to console and file")
    logger.error("This goes to console, file, and error.log")
    

    Syslog Handler

    from logging.handlers import SysLogHandler
    
    handler = SysLogHandler(address=('localhost', 514))
    logger.addHandler(handler)
    

    Email Handler

    from logging.handlers import SMTPHandler
    
    mail_handler = SMTPHandler(
        mailhost=('smtp.example.com', 587),
        fromaddr='app@example.com',
        toaddrs=['admin@example.com'],
        subject='Application Error',
        credentials=('username', 'password'),
        secure=()
    )
    mail_handler.setLevel(logging.ERROR)
    logger.addHandler(mail_handler)
    

    Formatters

    Formatters define the structure of log messages.

    Format String Attributes

    # Common attributes
    formatter = logging.Formatter(
        '%(asctime)s - '          # Timestamp
        '%(name)s - '             # Logger name
        '%(levelname)s - '        # Log level
        '%(message)s'             # Log message
    )
    
    # Detailed format with file info
    formatter = logging.Formatter(
        '%(asctime)s - '
        '%(name)s - '
        '%(levelname)s - '
        '%(filename)s:%(lineno)d - '  # File and line number
        '%(funcName)s() - '           # Function name
        '%(message)s'
    )
    
    # JSON-style format (fragile: quotes inside the message produce
    # invalid JSON - the custom formatter shown below is safer)
    formatter = logging.Formatter(
        '{"time": "%(asctime)s", '
        '"name": "%(name)s", '
        '"level": "%(levelname)s", '
        '"message": "%(message)s"}'
    )
    

    Custom Formatter

    import logging
    import json
    from datetime import datetime, timezone
    
    class JSONFormatter(logging.Formatter):
        def format(self, record):
            log_data = {
                'timestamp': datetime.now(timezone.utc).isoformat(),
                'level': record.levelname,
                'logger': record.name,
                'message': record.getMessage(),
                'module': record.module,
                'function': record.funcName,
                'line': record.lineno
            }
            
            if record.exc_info:
                log_data['exception'] = self.formatException(record.exc_info)
            
            return json.dumps(log_data)
    
    # Usage
    handler = logging.StreamHandler()
    handler.setFormatter(JSONFormatter())
    logger.addHandler(handler)
    

    Colored Output

    import logging
    
    class ColoredFormatter(logging.Formatter):
        """Colored log formatter for console output"""
        
        COLORS = {
            'DEBUG': '\033[94m',    # Blue
            'INFO': '\033[92m',     # Green
            'WARNING': '\033[93m',  # Yellow
            'ERROR': '\033[91m',    # Red
            'CRITICAL': '\033[95m', # Magenta
        }
        RESET = '\033[0m'
        
        def format(self, record):
            log_color = self.COLORS.get(record.levelname, self.RESET)
            record.levelname = f"{log_color}{record.levelname}{self.RESET}"
            return super().format(record)
    
    # Usage
    handler = logging.StreamHandler()
    handler.setFormatter(ColoredFormatter('%(levelname)s - %(message)s'))
    logger.addHandler(handler)
    

    Logging Exceptions

    Log Exceptions with Traceback

    import logging
    
    logger = logging.getLogger(__name__)
    
    try:
        result = 10 / 0
    except ZeroDivisionError:
        # exc_info=True attaches the current traceback to the record
        logger.error("Division by zero occurred", exc_info=True)
        # Equivalent shortcut: logger.exception() logs at ERROR level
        # and includes exc_info automatically:
        # logger.exception("Division by zero occurred")
    
    Output:
    ERROR - Division by zero occurred
    Traceback (most recent call last):
      File "script.py", line 6, in <module>
        result = 10 / 0
    ZeroDivisionError: division by zero
    

    Exception Logging Helper

    import logging
    import functools
    
    def log_exceptions(logger):
        """Decorator to log exceptions"""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    logger.exception(f"Exception in {func.__name__}: {str(e)}")
                    raise
            return wrapper
        return decorator
    
    # Usage
    logger = logging.getLogger(__name__)
    
    @log_exceptions(logger)
    def risky_operation(x, y):
        return x / y
    
    risky_operation(10, 0)  # Exception will be logged
    

    Configuration Files

    Using Configuration Dictionary

    import logging
    import logging.config
    
    LOGGING_CONFIG = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'standard': {
                'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
            },
            'detailed': {
                'format': '%(asctime)s - %(name)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'
            },
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
                'formatter': 'standard',
                'stream': 'ext://sys.stdout'
            },
            'file': {
                'class': 'logging.handlers.RotatingFileHandler',
                'level': 'DEBUG',
                'formatter': 'detailed',
                'filename': 'app.log',
                'maxBytes': 10485760,  # 10MB
                'backupCount': 5
            },
            'error_file': {
                'class': 'logging.FileHandler',
                'level': 'ERROR',
                'formatter': 'detailed',
                'filename': 'error.log'
            }
        },
        'loggers': {
            'myapp': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'error_file'],
                'propagate': False
            }
        },
        'root': {
            'level': 'INFO',
            'handlers': ['console']
        }
    }
    
    # Apply configuration
    logging.config.dictConfig(LOGGING_CONFIG)
    
    # Use logger
    logger = logging.getLogger('myapp')
    logger.info("Application started")
    

    Using Configuration File (INI format)

    logging.ini:
    [loggers]
    keys=root,myapp
    
    [handlers]
    keys=console,file,error
    
    [formatters]
    keys=simple,detailed
    
    [logger_root]
    level=INFO
    handlers=console
    
    [logger_myapp]
    level=DEBUG
    handlers=console,file,error
    qualname=myapp
    propagate=0
    
    [handler_console]
    class=StreamHandler
    level=INFO
    formatter=simple
    args=(sys.stdout,)
    
    [handler_file]
    class=handlers.RotatingFileHandler
    level=DEBUG
    formatter=detailed
    args=('app.log', 'a', 10485760, 5)
    
    [handler_error]
    class=FileHandler
    level=ERROR
    formatter=detailed
    args=('error.log', 'a')
    
    [formatter_simple]
    format=%(levelname)s - %(message)s
    
    [formatter_detailed]
    format=%(asctime)s - %(name)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s
    
    Load configuration:
    import logging
    import logging.config
    
    logging.config.fileConfig('logging.ini')
    logger = logging.getLogger('myapp')
    logger.info("Application started")
    

    Using YAML Configuration

    pip install pyyaml
    
    logging.yaml:
    version: 1
    disable_existing_loggers: False
    
    formatters:
      simple:
        format: '%(levelname)s - %(message)s'
      detailed:
        format: '%(asctime)s - %(name)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'
    
    handlers:
      console:
        class: logging.StreamHandler
        level: INFO
        formatter: simple
        stream: ext://sys.stdout
      
      file:
        class: logging.handlers.RotatingFileHandler
        level: DEBUG
        formatter: detailed
        filename: app.log
        maxBytes: 10485760  # 10MB
        backupCount: 5
      
      error:
        class: logging.FileHandler
        level: ERROR
        formatter: detailed
        filename: error.log
    
    loggers:
      myapp:
        level: DEBUG
        handlers: [console, file, error]
        propagate: no
    
    root:
      level: INFO
      handlers: [console]
    
    Load YAML configuration:
    import logging
    import logging.config
    import yaml
    
    with open('logging.yaml', 'r') as f:
        config = yaml.safe_load(f)
        logging.config.dictConfig(config)
    
    logger = logging.getLogger('myapp')
    logger.info("Application started")
    

    Practical Examples

    Example 1: Web Application Logging

    import logging
    from logging.handlers import RotatingFileHandler
    from flask import Flask, request, g
    import time
    import uuid
    
    app = Flask(__name__)
    
    # Configure logging
    def setup_logging():
        # Create logs directory
        import os
        os.makedirs('logs', exist_ok=True)
        
        # Application logger
        app_logger = logging.getLogger('myapp')
        app_logger.setLevel(logging.DEBUG)
        
        # File handler for all logs
        file_handler = RotatingFileHandler(
            'logs/app.log',
            maxBytes=10*1024*1024,
            backupCount=10
        )
        file_handler.setLevel(logging.DEBUG)
        file_handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        ))
        
        # Error file handler
        error_handler = RotatingFileHandler(
            'logs/error.log',
            maxBytes=10*1024*1024,
            backupCount=10
        )
        error_handler.setLevel(logging.ERROR)
        error_handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(pathname)s:%(lineno)d - %(message)s'
        ))
        
        # Console handler
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.INFO)
        console_handler.setFormatter(logging.Formatter(
            '%(levelname)s - %(message)s'
        ))
        
        app_logger.addHandler(file_handler)
        app_logger.addHandler(error_handler)
        app_logger.addHandler(console_handler)
        
        return app_logger
    
    logger = setup_logging()
    
    # Request logging middleware
    @app.before_request
    def before_request():
        g.start_time = time.time()
        g.request_id = str(uuid.uuid4())
        
        logger.info(
            f"Request started: {request.method} {request.path} "
            f"[{g.request_id}] from {request.remote_addr}"
        )
    
    @app.after_request
    def after_request(response):
        if hasattr(g, 'start_time'):
            elapsed = time.time() - g.start_time
            logger.info(
                f"Request completed: {request.method} {request.path} "
                f"[{g.request_id}] status={response.status_code} "
                f"duration={elapsed:.3f}s"
            )
        return response
    
    # Error logging
    @app.errorhandler(Exception)
    def handle_exception(e):
        logger.exception(f"Unhandled exception [{g.request_id}]: {str(e)}")
        return {"error": "Internal server error"}, 500
    
    # Route example
    @app.route('/users/<int:user_id>')
    def get_user(user_id):
        logger.debug(f"Fetching user {user_id}")
        
        try:
            # Simulate database query
            user = fetch_user_from_db(user_id)
            logger.info(f"User {user_id} retrieved successfully")
            return {"user": user}
        except UserNotFound:
            logger.warning(f"User {user_id} not found")
            return {"error": "User not found"}, 404
        except Exception as e:
            logger.error(f"Error fetching user {user_id}: {str(e)}", exc_info=True)
            raise
    

    Example 2: Background Task Logger

    import logging
    from logging.handlers import TimedRotatingFileHandler
    import time
    from datetime import datetime
    
    class TaskLogger:
        """Logger for background tasks with task-specific log files"""
        
        def __init__(self, task_name, log_dir='logs/tasks'):
            self.task_name = task_name
            self.log_dir = log_dir
            self.logger = self._setup_logger()
        
        def _setup_logger(self):
            import os
            os.makedirs(self.log_dir, exist_ok=True)
            
            logger = logging.getLogger(f'task.{self.task_name}')
            logger.setLevel(logging.DEBUG)
            
            # Task-specific log file
            log_file = f'{self.log_dir}/{self.task_name}.log'
            handler = TimedRotatingFileHandler(
                log_file,
                when='midnight',
                interval=1,
                backupCount=30
            )
            handler.setFormatter(logging.Formatter(
                '%(asctime)s - %(levelname)s - %(message)s'
            ))
            
            logger.addHandler(handler)
            return logger
        
        def start(self):
            """Log task start"""
            self.logger.info(f"Task '{self.task_name}' started")
            self.start_time = time.time()
        
        def progress(self, current, total, message=""):
            """Log task progress"""
            percentage = (current / total) * 100
            self.logger.info(
                f"Progress: {current}/{total} ({percentage:.1f}%) {message}"
            )
        
        def complete(self, success=True, message=""):
            """Log task completion"""
            duration = time.time() - self.start_time
            status = "completed" if success else "failed"
            self.logger.info(
                f"Task '{self.task_name}' {status} in {duration:.2f}s {message}"
            )
        
        def error(self, error_msg, exc_info=False):
            """Log task error"""
            self.logger.error(error_msg, exc_info=exc_info)
    
    # Usage
    def process_data():
        task_logger = TaskLogger('data_processing')
        task_logger.start()
        
        try:
            data = fetch_data()
            total = len(data)
            
            for i, item in enumerate(data, 1):
                process_item(item)
                
                if i % 100 == 0:
                    task_logger.progress(i, total, f"Processing {item}")
            
            task_logger.complete(success=True, message=f"Processed {total} items")
            
        except Exception as e:
            task_logger.error(f"Task failed: {str(e)}", exc_info=True)
            task_logger.complete(success=False)
            raise
    

    Example 3: Structured Logging with Context

    import logging
    import json
    from contextvars import ContextVar
    from datetime import datetime, timezone
    
    # Context variables for request tracking
    request_id_var = ContextVar('request_id', default=None)
    user_id_var = ContextVar('user_id', default=None)
    
    class ContextFilter(logging.Filter):
        """Add context information to log records"""
        
        def filter(self, record):
            record.request_id = request_id_var.get()
            record.user_id = user_id_var.get()
            return True
    
    class StructuredFormatter(logging.Formatter):
        """Format logs as JSON with context"""
        
        def format(self, record):
            log_data = {
                'timestamp': datetime.now(timezone.utc).isoformat(),
                'level': record.levelname,
                'logger': record.name,
                'message': record.getMessage(),
                'module': record.module,
                'function': record.funcName,
                'line': record.lineno,
            }
            
            # Add context
            if hasattr(record, 'request_id') and record.request_id:
                log_data['request_id'] = record.request_id
            if hasattr(record, 'user_id') and record.user_id:
                log_data['user_id'] = record.user_id
            
            # Add exception info if present
            if record.exc_info:
                log_data['exception'] = self.formatException(record.exc_info)
            
            # Add extra fields
            if hasattr(record, 'extra_data'):
                log_data['extra'] = record.extra_data
            
            return json.dumps(log_data)
    
    # Setup structured logger
    def setup_structured_logger():
        logger = logging.getLogger('structured')
        logger.setLevel(logging.INFO)
        
        handler = logging.FileHandler('logs/structured.log')
        handler.setFormatter(StructuredFormatter())
        handler.addFilter(ContextFilter())
        
        logger.addHandler(handler)
        return logger
    
    logger = setup_structured_logger()
    
    # Usage in web application
    def handle_request(request_id, user_id):
        # Set context
        request_id_var.set(request_id)
        user_id_var.set(user_id)
        
        # Log with context
        logger.info("Processing request")
        
        # Log with extra data
        logger.info(
            "User action",
            extra={'extra_data': {'action': 'login', 'ip': '192.168.1.1'}}
        )
    

    Example 4: Performance Logging

    import logging
    import time
    import functools
    from contextlib import contextmanager
    
    logger = logging.getLogger(__name__)
    
    @contextmanager
    def log_execution_time(operation_name):
        """Context manager to log execution time"""
        start_time = time.time()
        logger.info(f"Starting: {operation_name}")
        
        try:
            yield
        finally:
            elapsed = time.time() - start_time
            logger.info(f"Completed: {operation_name} in {elapsed:.3f}s")
    
    def log_performance(func):
        """Decorator to log function performance"""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start_time = time.time()
            logger.debug(f"Calling {func.__name__} with args={args}, kwargs={kwargs}")
            
            try:
                result = func(*args, **kwargs)
                elapsed = time.time() - start_time
                logger.info(f"{func.__name__} completed in {elapsed:.3f}s")
                return result
            except Exception as e:
                elapsed = time.time() - start_time
                logger.error(
                    f"{func.__name__} failed after {elapsed:.3f}s: {str(e)}",
                    exc_info=True
                )
                raise
        
        return wrapper
    
    # Usage with context manager
    def process_large_file(filename):
        with log_execution_time(f"Processing {filename}"):
            # ... processing logic
            time.sleep(2)  # Simulate work
    
    # Usage with decorator
    @log_performance
    def calculate_statistics(data):
        # ... calculation logic
        time.sleep(1)  # Simulate work
        return {"mean": 42, "median": 40}
    

    Best Practices

    1. Use Appropriate Log Levels

    # DEBUG: Detailed diagnostic information
    logger.debug(f"Processing item {item_id} with config {config}")
    
    # INFO: General informational messages
    logger.info(f"User {user_id} logged in successfully")
    
    # WARNING: Potentially harmful situations
    logger.warning(f"Disk space low: {free_space_mb}MB remaining")
    
    # ERROR: Error events that might still allow the app to continue
    logger.error(f"Failed to send email to {email}", exc_info=True)
    
    # CRITICAL: Very severe error events that might cause shutdown
    logger.critical("Database connection pool exhausted")
    

    2. Use __name__ for Logger Names

    # Good: Use __name__ for module-based logger hierarchy
    logger = logging.getLogger(__name__)
    
    # Bad: Hardcoded names
    logger = logging.getLogger('mylogger')
    

    3. Don't Log Sensitive Information

    # ❌ Bad: Logging sensitive data
    logger.info(f"User login: username={username}, password={password}")
    
    # ✅ Good: Don't log sensitive data
    logger.info(f"User login: username={username}")
    
    # ✅ Good: Mask sensitive data
    logger.info(f"Processing credit card: {card_number[:4]}****{card_number[-4:]}")
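
    Masking can also be centralized in a logging.Filter so that no call site can forget it. A minimal sketch (the RedactFilter name and the card-number pattern are illustrative, not exhaustive):

```python
import logging
import re

class RedactFilter(logging.Filter):
    """Mask card-like digit runs before the record is emitted."""
    CARD = re.compile(r'\b(\d{4})\d{8,11}(\d{4})\b')

    def filter(self, record):
        # Resolve %-style args first, then redact the final message
        record.msg = self.CARD.sub(r'\1****\2', record.getMessage())
        record.args = None  # message is already fully formatted
        return True

handler = logging.StreamHandler()
handler.addFilter(RedactFilter())

logger = logging.getLogger('payments')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("Charging card %s", "4111111111111111")
# emits: Charging card 4111****1111
```

    Attaching the filter to the handler means every record passing through it is scrubbed, regardless of which logger produced it.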
    

    4. Use Lazy Formatting

    # ❌ Bad: the message string is built eagerly, even if DEBUG is disabled
    logger.debug("Processing: " + expensive_operation())
    
    # ✅ Better: %-style interpolation is deferred until the record is emitted
    # (note: expensive_operation() itself is still called either way)
    logger.debug("Processing: %s", expensive_operation())
    
    # ✅ Good: skip the expensive call entirely when DEBUG is disabled
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Processing: %s", expensive_operation())
    
    # ⚠️ f-strings are convenient but always formatted eagerly - avoid them
    # for messages that are usually filtered out
    logger.debug(f"Processing: {operation_result}")
    

    5. Log Exceptions Properly

    # ❌ Bad: Converting exception to string loses traceback
    try:
        risky_operation()
    except Exception as e:
        logger.error(f"Error: {str(e)}")
    
    # ✅ Good: Include traceback
    try:
        risky_operation()
    except Exception as e:
        logger.exception("Error in risky_operation")
        # Or: logger.error("Error", exc_info=True)
    

    6. Configure Once

    # ❌ Bad: Configuring in multiple places
    logging.basicConfig()  # In module A
    logging.basicConfig()  # In module B - has no effect!
    
    # ✅ Good: Configure once at application entry point
    # main.py
    if __name__ == '__main__':
        setup_logging()
        app.run()
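
    One way to make accidental double-configuration harmless is to guard the setup function itself. A minimal sketch (setup_logging here is a hypothetical stand-in for your real entry-point helper):

```python
import logging

def setup_logging(level=logging.INFO):
    """Configure the root logger exactly once; later calls are no-ops."""
    root = logging.getLogger()
    if root.handlers:  # already configured elsewhere - do nothing
        return
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    ))
    root.addHandler(handler)
    root.setLevel(level)

setup_logging()
setup_logging()  # safe: no duplicate handlers are added
```

    Since Python 3.8, logging.basicConfig(force=True) offers the opposite escape hatch: it removes any existing root handlers and reconfigures from scratch.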
    

    7. Use Configuration Files

    # ✅ Good: Externalize configuration
    import logging.config
    import yaml
    
    with open('logging.yaml') as f:
        config = yaml.safe_load(f)
        logging.config.dictConfig(config)
    

    Third-Party Libraries

    Loguru

    pip install loguru
    
    from loguru import logger
    
    # Automatic file rotation
    logger.add("app.log", rotation="500 MB")
    
    # Time-based rotation
    logger.add("app.log", rotation="00:00")  # Daily at midnight
    
    # Colorful output
    logger.debug("Debug message")
    logger.info("Info message")
    logger.warning("Warning message")
    logger.error("Error message")
    
    # Exception catching
    @logger.catch
    def risky_function():
        1 / 0
    
    risky_function()  # Exception is automatically logged
    

    Structlog

    pip install structlog
    
    import structlog
    
    structlog.configure(
        processors=[
            structlog.processors.add_log_level,
            structlog.processors.TimeStamper(fmt="iso"),
            structlog.processors.JSONRenderer()
        ]
    )
    
    logger = structlog.get_logger()
    logger.info("user_action", user_id=123, action="login", ip="192.168.1.1")
    

    Integration with Frameworks

    FastAPI

    import logging
    from fastapi import FastAPI, Request
    import time
    
    app = FastAPI()
    logger = logging.getLogger(__name__)
    
    @app.middleware("http")
    async def log_requests(request: Request, call_next):
        start_time = time.time()
        
        logger.info(f"Request: {request.method} {request.url.path}")
        
        response = await call_next(request)
        
        duration = time.time() - start_time
        logger.info(
            f"Response: {request.method} {request.url.path} "
            f"status={response.status_code} duration={duration:.3f}s"
        )
        
        return response
    

    Django

    # settings.py
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'verbose': {
                'format': '{levelname} {asctime} {module} {message}',
                'style': '{',
            },
        },
        'handlers': {
            'file': {
                'level': 'DEBUG',
                'class': 'logging.FileHandler',
                'filename': 'debug.log',
                'formatter': 'verbose',
            },
        },
        'loggers': {
            'django': {
                'handlers': ['file'],
                'level': 'INFO',
            },
            'myapp': {
                'handlers': ['file'],
                'level': 'DEBUG',
            },
        },
    }
    

    Testing Logs

    import logging
    import unittest
    
    class TestLogging(unittest.TestCase):
        def setUp(self):
            # Capture logs
            self.log_capture = []
            handler = logging.Handler()
            handler.emit = lambda record: self.log_capture.append(record)
            
            logger = logging.getLogger('myapp')
            logger.addHandler(handler)
            logger.setLevel(logging.DEBUG)
        
        def test_user_login_logs(self):
            # Run function that logs
            user_login('testuser')
            
            # Assert logs
            self.assertEqual(len(self.log_capture), 1)
            self.assertEqual(self.log_capture[0].levelname, 'INFO')
            self.assertIn('testuser', self.log_capture[0].getMessage())
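
    The standard library also provides unittest's assertLogs context manager, which captures records without any manual handler wiring (the test and function names here are illustrative):

```python
import logging
import unittest

logger = logging.getLogger('myapp')

def user_login(username):
    # Hypothetical function under test
    logger.info("User %s logged in", username)

class TestLoginLogs(unittest.TestCase):
    def test_login_is_logged(self):
        # assertLogs fails the test if no matching record is emitted
        with self.assertLogs('myapp', level='INFO') as captured:
            user_login('testuser')
        self.assertEqual(len(captured.records), 1)
        self.assertIn('testuser', captured.output[0])
```

    Run with python -m unittest.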
    

    Key Takeaways

  • Use Logging, Not Print - Logging is for applications, print is for scripts
  • Appropriate Levels - Use the right level for each message
  • Module Loggers - Use logging.getLogger(__name__) for each module
  • Configure Once - Set up logging at the application entry point
  • Structured Logs - Use structured/JSON logging for production
  • Rotate Files - Use rotating handlers to prevent huge log files
  • Don't Log Secrets - Never log passwords, keys, or sensitive data
  • Exception Info - Use logger.exception() or exc_info=True

    Additional Resources

    Official Documentation:

  • Python Logging: https://docs.python.org/3/library/logging.html
  • Logging Cookbook: https://docs.python.org/3/howto/logging-cookbook.html

    Third-Party Libraries:

  • Loguru: https://github.com/Delgan/loguru
  • Structlog: https://www.structlog.org
  • Python JSON Logger: https://github.com/madzak/python-json-logger

    Best Practices:

  • The Twelve-Factor App: Logs as Event Streams
  • Google Cloud Logging Best Practices

    Next Steps

  • Set up basic logging in your application
  • Configure file rotation for production
  • Implement structured/JSON logging
  • Add request logging middleware
  • Set up centralized logging (ELK Stack, CloudWatch)
  • Monitor logs in production
  • Create alerting based on log patterns

    Logging is essential for understanding and maintaining your applications. Start with basic logging and gradually add more sophisticated patterns as your needs grow!

    Topics

    Python, Logging, Debugging, Monitoring, Best Practices

    Found This Helpful?

    If you have questions or suggestions for improving these notes, I'd love to hear from you.