
TDD Essentials

Test-Driven Development principles, practices, and workflow

April 20, 2026
Updated regularly

Test-Driven Development (TDD) is a software development approach where you write tests before writing the actual code.

The TDD Cycle

The Red-Green-Refactor cycle is the core of TDD:

1. 🔴 RED    → Write a failing test
2. 🟢 GREEN  → Write minimal code to pass
3. 🔵 REFACTOR → Clean up code while keeping tests green

Step-by-Step Example

# ========== STEP 1: RED (Write failing test) ==========
def test_add_two_numbers():
    result = add(2, 3)
    assert result == 5
# Run test → FAILS (add() doesn't exist yet)

# ========== STEP 2: GREEN (Make it pass) ==========
def add(a, b):
    return a + b
# Run test → PASSES

# ========== STEP 3: REFACTOR (Improve if needed) ==========
# Code is already simple, no refactoring needed
# Run test → STILL PASSES

Core Principles

1. Write Tests First

Always write the test before the implementation:

# ❌ WRONG: Code first, then test
def calculate_discount(price, percent):
    return price * (percent / 100)

def test_calculate_discount():
    assert calculate_discount(100, 10) == 10

# ✅ CORRECT: Test first, then code
def test_calculate_discount():
    assert calculate_discount(100, 10) == 10

def calculate_discount(price, percent):
    return price * (percent / 100)

2. Write Minimal Code

Only write enough code to make the test pass:

# Test
def test_is_even():
    assert is_even(2) == True
    assert is_even(3) == False

# ❌ OVER-ENGINEERING: Too complex for requirements
def is_even(n):
    if n < 0:
        n = abs(n)
    return n % 2 == 0 if isinstance(n, int) else False

# ✅ MINIMAL: Just what's needed
def is_even(n):
    return n % 2 == 0

3. One Test at a Time

Focus on one behavior per test:

# ❌ BAD: Testing multiple behaviors
def test_user_validation():
    assert validate_email("test@example.com") == True
    assert validate_password("Pass123!") == True
    assert validate_username("john_doe") == True

# ✅ GOOD: Separate tests for each behavior
def test_validate_email():
    assert validate_email("test@example.com") == True

def test_validate_password():
    assert validate_password("Pass123!") == True

def test_validate_username():
    assert validate_username("john_doe") == True
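For illustration, here is a GREEN-step sketch of one of these validators. `validate_email` is hypothetical, and real-world email validation is far stricter; this is only the minimal code the test above would drive:

```python
def validate_email(email):
    # GREEN-step minimum: a non-empty local part, one "@",
    # and a dot somewhere in the domain part.
    local, sep, domain = email.partition("@")
    return bool(local) and sep == "@" and "." in domain
```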

Testing Patterns

Arrange-Act-Assert (AAA)

Structure every test with three clear sections:

def test_shopping_cart_total():
    # ========== ARRANGE: Set up test data ==========
    cart = ShoppingCart()
    cart.add_item(Item("Apple", 1.50))
    cart.add_item(Item("Banana", 0.75))
    
    # ========== ACT: Execute the behavior ==========
    total = cart.calculate_total()
    
    # ========== ASSERT: Verify the result ==========
    assert total == 2.25

Test Edge Cases

Don't just test the happy path:

def test_divide():
    # Happy path
    assert divide(10, 2) == 5
    
def test_divide_by_zero():
    # Edge case: division by zero
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)
        
def test_divide_negative_numbers():
    # Edge case: negative numbers
    assert divide(-10, 2) == -5
    
def test_divide_floats():
    # Edge case: floating point
    assert divide(5, 2) == 2.5
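The `divide` implementation these tests drive can stay minimal, because Python's `/` operator already raises `ZeroDivisionError` on a zero divisor and performs true division; a sketch:

```python
def divide(a, b):
    # True division: divide(5, 2) returns 2.5, and b == 0
    # raises ZeroDivisionError without any extra code.
    return a / b
```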

Test Doubles (Mocks, Stubs, Fakes)

Isolate the code under test:

# ========== Original Code with External Dependency ==========
def send_welcome_email(user_email):
    email_service = EmailService()  # hard-coded dependency: awkward to test
    return email_service.send(user_email, "Welcome!")

# ========== Refactored: Inject the Dependency ==========
def send_welcome_email_with_service(user_email, email_service):
    return email_service.send(user_email, "Welcome!")

# ========== Test with Mock (no real email sent) ==========
from unittest.mock import Mock

def test_send_welcome_email():
    # Arrange
    mock_service = Mock()
    mock_service.send.return_value = True
    
    # Act
    result = send_welcome_email_with_service(
        "test@example.com", 
        mock_service
    )
    
    # Assert
    assert result == True
    mock_service.send.assert_called_once_with(
        "test@example.com", 
        "Welcome!"
    )
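The failure path deserves the same treatment. In this self-contained sketch, the service-injecting variant is repeated so the example runs on its own, and the mock is told to report a rejected send:

```python
from unittest.mock import Mock

def send_welcome_email_with_service(user_email, email_service):
    # Injected dependency: tests can hand in a Mock instead of a real service
    return email_service.send(user_email, "Welcome!")

def test_send_welcome_email_reports_failure():
    # Arrange: make the fake service report a rejected send
    mock_service = Mock()
    mock_service.send.return_value = False

    # Act
    result = send_welcome_email_with_service("test@example.com", mock_service)

    # Assert: the failure is passed straight back to the caller
    assert result == False
    mock_service.send.assert_called_once_with("test@example.com", "Welcome!")
```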

TDD Workflow Example

Building a simple calculator with TDD:

# ========== Test 1: Basic addition ==========
def test_add_positive_numbers():
    calc = Calculator()
    assert calc.add(2, 3) == 5

# Implementation
class Calculator:
    def add(self, a, b):
        return a + b

# ========== Test 2: Add negative numbers ==========
def test_add_negative_numbers():
    calc = Calculator()
    assert calc.add(-2, -3) == -5
# Already passes! No code change needed.

# ========== Test 3: Subtraction ==========
def test_subtract():
    calc = Calculator()
    assert calc.subtract(5, 3) == 2

# Implementation
class Calculator:
    def add(self, a, b):
        return a + b
    
    def subtract(self, a, b):
        return a - b

# ========== Test 4: Multiplication ==========
def test_multiply():
    calc = Calculator()
    assert calc.multiply(4, 5) == 20

# Implementation
class Calculator:
    def add(self, a, b):
        return a + b
    
    def subtract(self, a, b):
        return a - b
    
    def multiply(self, a, b):
        return a * b
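Once several tests share the same shape, `pytest.mark.parametrize` (assuming pytest is the runner here) can collapse them into a table of cases; a sketch, with the class repeated so it runs on its own:

```python
import pytest

class Calculator:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

# One parametrized test replaces test_add_positive_numbers
# and test_add_negative_numbers above
@pytest.mark.parametrize("a, b, expected", [
    (2, 3, 5),
    (-2, -3, -5),
])
def test_add(a, b, expected):
    assert Calculator().add(a, b) == expected
```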

Common Testing Frameworks

Python - pytest

# Install: pip install pytest

# test_math.py
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 5 - 3 == 2

# Run: pytest
# Run specific file: pytest test_math.py
# Run with coverage: pytest --cov

JavaScript - Jest

// Install: npm install --save-dev jest

// math.test.js
test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});

test('subtracts 5 - 3 to equal 2', () => {
  expect(subtract(5, 3)).toBe(2);
});

// Run: npm test

Python - unittest

# Built into Python, no installation needed

import unittest

class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(add(2, 3), 5)
    
    def test_subtraction(self):
        self.assertEqual(subtract(5, 3), 2)

# Run: python -m unittest test_math.py

Benefits of TDD

  • Fewer Bugs: Catch issues early in development
  • Better Design: Writing tests first encourages modular, testable code
  • Documentation: Tests serve as living documentation
  • Confidence: Refactor safely knowing tests will catch regressions
  • Faster Debugging: Pinpoint exact failure location
Common Pitfalls

Testing Implementation Details

# ❌ BAD: Tests internal implementation
def test_user_save():
    user = User("john")
    user.save()
    assert user._db_connection.execute.called  # Internal detail!

# ✅ GOOD: Tests behavior
def test_user_save():
    user = User("john")
    user.save()
    saved_user = User.find_by_name("john")
    assert saved_user is not None

Writing Tests After Code

# ❌ BAD: Code exists, writing tests after
# You already know it works, so tests become a formality

# ✅ GOOD: Test drives the design
# Test forces you to think about the API before the implementation

Not Running Tests Frequently

# ❌ BAD: Write many tests, run once
git commit -m "Added 50 tests"

# ✅ GOOD: Run after each test/implementation
pytest  # Run after each RED step
pytest  # Run after each GREEN step
pytest  # Run after each REFACTOR step

TDD Best Practices

  • Test names should describe behavior: test_empty_cart_has_zero_total() not test_cart_1()
  • Keep tests fast: Use mocks for external dependencies
  • Test behavior, not implementation: Focus on what, not how
  • One assertion per test: Makes failures easier to diagnose
  • Independent tests: Each test should run in isolation
  • Clean up: Use fixtures/setup to avoid test pollution
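The fixture advice above looks like this in pytest; `ShoppingCart` is a minimal hypothetical stand-in so the sketch is self-contained:

```python
import pytest

class ShoppingCart:
    # Minimal stand-in so the fixture example runs on its own
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

@pytest.fixture
def cart():
    # A fresh cart for every test: no state leaks between tests
    return ShoppingCart()

def test_empty_cart_has_zero_total(cart):
    assert cart.total() == 0

def test_total_sums_item_prices(cart):
    cart.add_item("Apple", 1.50)
    cart.add_item("Banana", 0.75)
    assert cart.total() == 2.25
```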
Quick Reference

Phase      Action                 Result
RED        Write failing test     Test fails (expected)
GREEN      Write minimal code     Test passes
REFACTOR   Improve code quality   Tests still pass

TDD Commands

# Python (pytest)
pytest                     # Run all tests
pytest test_file.py        # Run specific file
pytest -k "test_name"      # Run tests matching pattern
pytest --cov               # Run with coverage report
pytest -v                  # Verbose output
pytest -x                  # Stop on first failure

# Python (unittest)
python -m unittest         # Run all tests
python -m unittest test_file.TestClass.test_method

# JavaScript (Jest)
npm test                   # Run all tests
npm test -- test_file      # Run specific file
npm test -- --coverage     # With coverage
npm test -- --watch        # Watch mode

Resources

  • Test Driven Development: By Example by Kent Beck
  • pytest Documentation
  • Jest Documentation
  • Martin Fowler - TDD
  • Uncle Bob - TDD Laws
Topics

TDD, Testing, Software Engineering, Best Practices

Found This Helpful?

If you have questions or suggestions for improving these notes, I'd love to hear from you.