Python Automated Testing Using Pytest: A Comprehensive Guide

Automated testing is a crucial aspect of modern software development. It ensures code reliability, maintainability, and helps prevent regression bugs. In the Python ecosystem, Pytest stands out as one of the most powerful, flexible, and widely-used testing frameworks.

What is Pytest?

Pytest is a testing framework for Python that makes it easy to write simple and scalable test cases. Whether you’re testing small units of code or complex systems, Pytest offers a clean and expressive syntax with a rich ecosystem of plugins.

Key Features:

  • Simple and readable test syntax
  • Powerful fixture mechanism for setup/teardown
  • Built-in support for parameterized testing
  • Compatibility with unittest-style test suites
  • Extensive plugin architecture

Installing Pytest

Installing Pytest is as simple as:

pip install pytest

pytest --version #version check

Writing Your First Test

Create a file named test_sample.py:

def add(a, b):
    return a + b

def test_addition():
    assert add(2, 3) == 5

Run all tests:

pytest

Pytest will discover any file whose name starts with test_ or ends with _test.py and automatically run every function prefixed with test_.
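
For example, in a hypothetical project layout like the one below, only the first two test files would be collected:

project/
    app.py
    tests/
        test_login.py        # collected: starts with test_
        checkout_test.py     # collected: ends with _test.py
        helpers.py           # ignored: matches neither pattern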

Run a specific test file

pytest path/test_c.py

Run a specific test case within a test file

pytest path/test_c.py::test_function_name
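
You can also select tests by keyword expression with the -k option (the name fragment below is just an illustration):

pytest -k "addition"

This runs every collected test whose name contains addition, such as test_addition from the earlier example.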

Naming Conventions

  • Files should start with test_ or end with _test.py
  • Test functions should start with test_

Pytest auto-discovers all test files and test functions that follow these conventions.

Using Fixtures

  • Fixtures provide a way to set up test dependencies and clean up afterward.
  • Imagine you’re testing a function that needs some data or setup before the test runs. Instead of repeating the setup every time, you can define it once using a fixture, and then re-use it across multiple tests.

Simple Example

Let’s say you want to test something using a dictionary like this: {"name": "Alice", "age": 30}

Without a fixture:

def test_name():
    data = {"name": "Alice", "age": 30}
    assert data["name"] == "Alice"

def test_age():
    data = {"name": "Alice", "age": 30}
    assert data["age"] == 30

You’re repeating the same dictionary in both tests — not ideal.

Now with a Fixture

You can define the common data once using a @pytest.fixture, and then use it in any test just by adding it as an argument:

import pytest

# Fixture that returns sample data
@pytest.fixture
def sample_data():
    return {"name": "Alice", "age": 30}

# Use the fixture in your test by passing it as a function argument
def test_name(sample_data):
    assert sample_data["name"] == "Alice"

def test_age(sample_data):
    assert sample_data["age"] == 30
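
Fixtures can also clean up after a test. With a yield fixture, everything before the yield is setup and everything after it is teardown. A minimal sketch (the file name and contents are made up for illustration):

import os
import pytest

@pytest.fixture
def temp_file():
    # Setup: create a small scratch file (hypothetical example)
    path = "scratch.txt"
    with open(path, "w") as f:
        f.write("hello")
    yield path  # this value is handed to the test
    # Teardown: runs after the test finishes, pass or fail
    os.remove(path)

def test_temp_file(temp_file):
    with open(temp_file) as f:
        assert f.read() == "hello"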

Parameterized Testing

You can run the same test with multiple sets of data using @pytest.mark.parametrize:

import pytest

@pytest.mark.parametrize("a,b,result", [
    (2, 3, 5),
    (1, 1, 2),
    (0, 0, 0)
])
def test_add(a, b, result):
    assert a + b == result
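
@pytest.mark.parametrize also accepts a couple of optional extras: pytest.param lets you attach marks (such as xfail) to a single case, and ids gives each case a readable name in the test report. A small sketch (the ids and the deliberately wrong case are made up for illustration):

import pytest

@pytest.mark.parametrize("a, b, result", [
    (2, 3, 5),
    pytest.param(1, 1, 3, marks=pytest.mark.xfail(reason="deliberately wrong expectation")),
], ids=["simple", "known-bad"])
def test_add_cases(a, b, result):
    assert a + b == result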

Markers

Markers are annotations you place on test functions using the @pytest.mark decorator. They can be:

  • Custom labels for filtering (@pytest.mark.smoke, @pytest.mark.regression)
  • Built-in markers for modifying behavior (@pytest.mark.skip, @pytest.mark.parametrize, etc.)

Common Built-in Markers

  • @pytest.mark.skip(reason="..."): skip this test
  • @pytest.mark.skipif(condition, reason="..."): skip if the condition is True
  • @pytest.mark.xfail(reason="..."): expected to fail
  • @pytest.mark.parametrize: run a test with different sets of inputs
  • @pytest.mark.usefixtures("fixture_name"): explicitly use a fixture
  • @pytest.mark.filterwarnings: filter warnings in this test

Example

import pytest
import sys
import warnings

# ----------- Fixture -----------
@pytest.fixture
def setup_data():
    print("\n[Fixture] Setup data")
    return [1, 2, 3]

# ----------- skip -----------
@pytest.mark.skip(reason="Skipping because this feature is not ready")
def test_skip():
    assert 1 + 1 == 2

# ----------- skipif -----------
@pytest.mark.skipif(sys.platform == "win32", reason="Does not run on Windows")
def test_skipif():
    assert True

# ----------- xfail -----------
@pytest.mark.xfail(reason="Known issue: division by zero")
def test_xfail_division():
    result = 1 / 0
    assert result == 0

# ----------- xfail with strict -----------
@pytest.mark.xfail(strict=True, reason="Expected failure but this will pass")
def test_xfail_strict():
    assert 1 == 1  # Test passes, so strict xfail reports it as an XPASS failure

# ----------- usefixtures -----------
@pytest.mark.usefixtures("setup_data")
def test_usefixtures_example():
    print("[Test] Using setup_data fixture")
    assert True

# ----------- filterwarnings -----------
@pytest.mark.filterwarnings("ignore::DeprecationWarning")
def test_filterwarnings():
    warnings.warn("deprecated", DeprecationWarning)
    assert True

# ----------- parametrize -----------
@pytest.mark.parametrize("a, b, result", [
    (2, 3, 5),
    (10, 5, 15),
    (0, 0, 0),
])
def test_parametrize_addition(a, b, result):
    assert a + b == result

To Run and See Results:

Use this command to see full details:

pytest -rxXs test_builtin_markers.py

  • -rxXs adds a short summary section with the reasons for xfailed (x), xpassed (X), and skipped (s) tests
  • Add -s if you also want to see print output from fixtures/tests
  • Add -x if you want pytest to stop at the first failure

Custom Marker

import pytest

@pytest.mark.my_marker
def test_data_migration():
    assert True



Run only the tests tagged with that marker:

pytest -m "my_marker"

You may get a warning for unregistered custom markers:

PytestUnknownMarkWarning: Unknown pytest.mark.my_marker

Register Your Custom Marker

Add the marker to your pytest.ini:

# pytest.ini
[pytest]
markers =
    my_marker: description of my_marker
    my_marker2: description of my_marker2
    my_marker3: description of my_marker3
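
You can also make unregistered markers an error instead of a warning by turning on strict marker checking, either on the command line with pytest --strict-markers or via addopts in pytest.ini (a minimal sketch reusing the marker registered above):

# pytest.ini
[pytest]
addopts = --strict-markers
markers =
    my_marker: description of my_marker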

What is Test Coverage?

Test coverage is a metric used in software testing to measure how much of your source code is tested by your test suite (e.g., unit tests, integration tests).

In short: test coverage shows which parts of your code are exercised when your tests run.

How Is Test Coverage Measured?

Test coverage is measured as a percentage:

Coverage = (Number of lines executed by tests / Total number of lines) × 100
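
For example, if the test suite executes 80 of the 100 lines in a module, line coverage is (80 / 100) × 100 = 80%.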

However, coverage isn’t only about lines of code. It can be measured in various ways:

Types of Test Coverage

  • Line coverage: checks whether each line of code has been executed.
  • Branch coverage: checks whether all possible paths (like if/else) have been executed.
  • Function coverage: checks whether each function/method was called.
  • Statement coverage: similar to line coverage but more language-specific.
  • Condition coverage: checks whether each boolean sub-expression has been evaluated to both true and false.
  • Path coverage: ensures all possible control-flow paths have been tested.
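
To make the difference between line and branch coverage concrete, here is a small hypothetical sketch: the single test below executes every line of apply_discount (full line coverage), but the branch where is_member is False is never taken, so branch coverage stays below 100%.

def apply_discount(price, is_member):
    # Hypothetical rule: members get a flat 10 off
    if is_member:
        price = price - 10
    return price

def test_member_discount():
    assert apply_discount(100, True) == 90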

Tools to Measure Test Coverage

  • Python: coverage.py, pytest-cov
  • JavaScript: Istanbul, nyc, Jest
  • Java: JaCoCo, Cobertura
  • C#/.NET: dotCover, OpenCover
  • Go: built-in go test -cover

Example in Python with pytest:

pytest --cov=your_module tests/

Misconceptions

  • High coverage ≠ High quality (see the sketch after this list)
  • 100% coverage ≠ 100% tested behavior
  • Coverage does not guarantee bug-free code
  • Coverage should not be the only metric for test completeness
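
A hypothetical illustration of the first point: the test below executes every line of divide, giving it 100% line coverage, yet it never checks the result and never exercises the b == 0 case, so the coverage number says little about test quality.

def divide(a, b):
    return a / b

def test_divide_runs():
    # Executes every line of divide, but checks nothing meaningful
    divide(10, 2)
    assert True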

Test Coverage with pytest

When working with pytest (a popular testing framework in Python), you can measure test coverage using a plugin called pytest-cov, which integrates coverage.py with pytest.

Install Required Packages

pip install pytest pytest-cov

This installs:

  • pytest for running tests
  • pytest-cov to measure code coverage

Run Tests with Coverage Enabled

pytest --cov=your_package tests/

Explanation:

  • --cov=your_package: tells pytest to measure coverage for this specific module/package.
  • tests/: the folder where your test files are located.
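
To also see which line numbers were never executed, pytest-cov supports a term-missing report (still assuming the same hypothetical your_package layout):

pytest --cov=your_package --cov-report=term-missing tests/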

Generate Detailed Coverage Reports

Terminal Summary (default): Gives you a quick glance at what’s covered.

HTML Report (more detailed)

pytest --cov=your_package --cov-report=html

This gives you:

  • Color-coded line-by-line view of coverage.
  • Clickable HTML UI to explore coverage file-by-file.
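
By default the HTML report is written to an htmlcov/ directory; open htmlcov/index.html in a browser to explore it file by file.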

XML or JSON Report (CI integration)

pytest --cov=your_package --cov-report=xml
pytest --cov=your_package --cov-report=json

Use these formats for:

  • Uploading to tools like Codecov, Coveralls
  • Automated pipelines and dashboards

Summary

  • assert: native assert statements
  • Fixtures: dependency injection system for setup/teardown
  • Parametrize: test multiple inputs easily
  • Markers: tag and filter tests
  • Plugins: extend pytest functionality
  • Coverage: measure test coverage with pytest-cov
