Automated testing is a crucial aspect of modern software development. It ensures code reliability, maintainability, and helps prevent regression bugs. In the Python ecosystem, Pytest stands out as one of the most powerful, flexible, and widely-used testing frameworks.
What is Pytest?
Pytest is a testing framework for Python that makes it easy to write simple and scalable test cases. Whether you’re testing small units of code or complex systems, Pytest offers a clean and expressive syntax with a rich ecosystem of plugins.
Key Features:
- Simple and readable test syntax
- Powerful fixture mechanism for setup/teardown
- Built-in support for parameterized testing
- Compatibility with unittest and nose
- Extensive plugin architecture
Installing Pytest
Installing Pytest is as simple as:
```shell
pip install pytest
pytest --version  # check the installed version
```
Writing Your First Test
Create a file named test_sample.py:
```python
def add(a, b):
    return a + b

def test_addition():
    assert add(2, 3) == 5
```
Run all the tests:

```shell
pytest
```
Pytest will discover any file starting with test_ or ending with _test.py and automatically run all functions prefixed with test.
Run a specific test file:

```shell
pytest path/test_c.py
```

Run a specific test case within a test file:

```shell
pytest path/test_c.py::test_function_name
```
Naming Conventions
- Files should start with test_ or end with _test.py
- Test functions should start with test_
Pytest auto-discovers all matching test files and test functions.
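The default discovery rules can be sketched in a few lines of plain Python. This is illustrative only; pytest's real collector also handles test classes, packages, and configurable patterns:

```python
# Rough sketch of pytest's default discovery rules -- illustrative only.
from fnmatch import fnmatch

def is_test_file(filename):
    # Matches the two default file patterns: test_*.py and *_test.py
    return fnmatch(filename, "test_*.py") or fnmatch(filename, "*_test.py")

def is_test_function(name):
    # By default, any function whose name starts with "test" is collected
    return name.startswith("test")

print(is_test_file("test_sample.py"))     # True
print(is_test_file("sample.py"))          # False
print(is_test_function("test_addition"))  # True
```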
Using Fixtures
- Fixtures provide a way to set up test dependencies and clean up afterward.
- Imagine you’re testing a function that needs some data or setup before the test runs. Instead of repeating the setup every time, you can define it once using a fixture, and then re-use it across multiple tests.
Simple Example
Let's say you want to test something using a dictionary like `{"name": "Alice", "age": 30}`.
Without a fixture:
```python
def test_name():
    data = {"name": "Alice", "age": 30}
    assert data["name"] == "Alice"

def test_age():
    data = {"name": "Alice", "age": 30}
    assert data["age"] == 30
```
You're repeating the same dictionary in both tests, which is not ideal.
Now with a Fixture
You can define the common data once using a @pytest.fixture, and then use it in any test just by adding it as an argument:
```python
import pytest

# Fixture that returns sample data
@pytest.fixture
def sample_data():
    return {"name": "Alice", "age": 30}

# Use the fixture in a test by passing it as a function argument
def test_name(sample_data):
    assert sample_data["name"] == "Alice"

def test_age(sample_data):
    assert sample_data["age"] == 30
```
Parameterized Testing
You can run the same test with multiple sets of data using @pytest.mark.parametrize:
```python
import pytest

@pytest.mark.parametrize("a,b,result", [
    (2, 3, 5),
    (1, 1, 2),
    (0, 0, 0),
])
def test_add(a, b, result):
    assert a + b == result
```
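Individual cases can also carry readable test IDs via `pytest.param`. The `normalize` helper below is a made-up example function, not part of pytest:

```python
import pytest

def normalize(text):
    """Made-up helper: trim whitespace and lowercase."""
    return text.strip().lower()

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  Hello ", "hello"),
        ("WORLD", "world"),
        pytest.param("", "", id="empty-string"),  # custom, readable test ID
    ],
)
def test_normalize(raw, expected):
    assert normalize(raw) == expected
```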
Markers
Markers are annotations you place on test functions using the @pytest.mark decorator. They can be:
- Custom labels for filtering (@pytest.mark.smoke, @pytest.mark.regression)
- Built-in markers for modifying behavior (@pytest.mark.skip, @pytest.mark.parametrize, etc.)
Common Built-in Markers
Marker | Description |
---|---|
`@pytest.mark.skip(reason="...")` | Skip this test |
`@pytest.mark.skipif(condition, reason="...")` | Skip if condition is True |
`@pytest.mark.xfail(reason="...")` | Expected to fail |
`@pytest.mark.parametrize` | Run test with different sets of inputs |
`@pytest.mark.usefixtures("fixture_name")` | Explicitly use a fixture |
`@pytest.mark.filterwarnings` | Filter warnings in this test |
Example
```python
import pytest
import sys
import warnings

# ----------- Fixture -----------
@pytest.fixture
def setup_data():
    print("\n[Fixture] Setup data")
    return [1, 2, 3]

# ----------- skip -----------
@pytest.mark.skip(reason="Skipping because this feature is not ready")
def test_skip():
    assert 1 + 1 == 2

# ----------- skipif -----------
@pytest.mark.skipif(sys.platform == "win32", reason="Does not run on Windows")
def test_skipif():
    assert True

# ----------- xfail -----------
@pytest.mark.xfail(reason="Known issue: division by zero")
def test_xfail_division():
    result = 1 / 0
    assert result == 0

# ----------- xfail with strict -----------
@pytest.mark.xfail(strict=True, reason="Expected failure but this will pass")
def test_xfail_strict():
    assert 1 == 1  # Passing makes this an XPASS, reported as a failure under strict=True

# ----------- usefixtures -----------
@pytest.mark.usefixtures("setup_data")
def test_usefixtures_example():
    print("[Test] Using setup_data fixture")
    assert True

# ----------- filterwarnings -----------
@pytest.mark.filterwarnings("ignore::DeprecationWarning")
def test_filterwarnings():
    warnings.warn("deprecated", DeprecationWarning)
    assert True

# ----------- parametrize -----------
@pytest.mark.parametrize("a, b, result", [
    (2, 3, 5),
    (10, 5, 15),
    (0, 0, 0),
])
def test_parametrize_addition(a, b, result):
    assert a + b == result
```
To Run and See Results
Use this command to see full details:

```shell
pytest -rxXs test_builtin_markers.py
```
- `-rxXs` asks pytest for a short summary with reasons: `x` for xfailed, `X` for xpassed, and `s` for skipped tests
- add `-x` (a separate flag) to stop at the first failure
- add `-s` (a separate flag) to see print output from fixtures and tests, since it disables output capturing
Custom Marker
```python
import pytest

@pytest.mark.my_marker
def test_data_migration():
    assert True
```

Run only tests with that marker:

```shell
pytest -m "my_marker"
```

You may get a warning for an unregistered custom marker:

```
PytestUnknownMarkWarning: Unknown pytest.mark.my_marker
```
Register Your Custom Marker
Add the marker to your pytest.ini:
```ini
# pytest.ini
[pytest]
markers =
    my_marker: description of my_marker
    my_marker2: description of my_marker2
    my_marker3: description of my_marker3
```
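If your project uses pyproject.toml instead of pytest.ini, the same markers can be registered there under pytest's TOML section:

```toml
# pyproject.toml
[tool.pytest.ini_options]
markers = [
    "my_marker: description of my_marker",
]
```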
What is Test Coverage?
Test coverage is a metric used in software testing to measure how much of your source code is tested by your test suite (e.g., unit tests, integration tests).
In short: test coverage shows which parts of your code are exercised when tests run.
How Is Test Coverage Measured?
Test coverage is measured as a percentage:
Coverage = (Number of lines executed by tests / Total number of lines) × 100
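As a quick sanity check of the arithmetic, here is the formula as a tiny function (the names are illustrative, not part of any coverage tool):

```python
def line_coverage(lines_executed, total_lines):
    """Coverage percentage: executed lines divided by total lines, times 100."""
    return lines_executed / total_lines * 100

print(line_coverage(80, 100))  # 80.0
print(line_coverage(45, 60))   # 75.0
```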
However, coverage isn't only about lines of code. It can be measured in various ways:
Types of Test Coverage
Type | Description |
---|---|
Line coverage | Checks whether each line of code has been executed. |
Branch coverage | Checks whether all possible paths (like if-else) have been executed. |
Function coverage | Checks whether each function/method was called. |
Statement coverage | Similar to line coverage but more language-specific. |
Condition coverage | Checks if each boolean sub-expression was evaluated to true/false. |
Path coverage | Ensures all possible control flow paths have been tested. |
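To make the line-vs-branch distinction concrete, consider this made-up function. A test that calls it only with a negative number executes every line (100% line coverage), yet the branch where the `if` condition is False is never taken, which only branch coverage would report:

```python
# Made-up example: 100% line coverage can still miss a branch.
def clamp_negative(n):
    if n < 0:   # True and False outcomes are two separate branches
        n = 0
    return n

def test_clamp_negative_only():
    # Executes every line of clamp_negative -> 100% line coverage,
    # but the "condition is False" branch is never exercised,
    # so branch coverage stays incomplete.
    assert clamp_negative(-5) == 0
```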
Tools to Measure Test Coverage
Language | Tool Name |
---|---|
Python | coverage.py , pytest-cov |
JavaScript | Istanbul , nyc , Jest |
Java | JaCoCo , Cobertura |
C#/.NET | dotCover , OpenCover |
Go | Built-in go test -cover |
Example in Python with pytest:
```shell
pytest --cov=your_module tests/
```

Misconceptions
- High coverage ≠ High quality
- 100% coverage ≠ 100% tested behavior
- Coverage does not guarantee bug-free code
- Coverage should not be the only metric for test completeness
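A small made-up example of why high coverage does not guarantee correctness: the test below executes every line of `safe_divide`, yet it only locks in a questionable behaviour (returning 0 on division by zero) rather than catching it:

```python
def safe_divide(a, b):
    if b == 0:
        return 0  # silently hides a likely error -- debatable design
    return a / b

def test_safe_divide():
    assert safe_divide(10, 2) == 5  # happy path
    assert safe_divide(1, 0) == 0   # runs the line, but only confirms the
                                    # questionable behaviour: 100% coverage,
                                    # design flaw still ships
```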
Test Coverage with pytest
When working with pytest (a popular testing framework in Python), you can measure test coverage using a plugin called pytest-cov, which integrates coverage.py with pytest.
Install Required Packages
```shell
pip install pytest pytest-cov
```

This installs:
- pytest for running tests
- pytest-cov to measure code coverage
Run Tests with Coverage Enabled
```shell
pytest --cov=your_package tests/
```

Explanation:
- `--cov=your_package`: tells pytest to measure coverage for this specific module/package.
- `tests/`: the folder where your test files are located.
Generate Detailed Coverage Reports
Terminal Summary (default): Gives you a quick glance at what’s covered.
HTML Report (more detailed)
```shell
pytest --cov=your_package --cov-report=html
```

This gives you:
- Color-coded line-by-line view of coverage.
- Clickable HTML UI to explore coverage file-by-file.
XML or JSON Report (CI integration)
```shell
pytest --cov=your_package --cov-report=xml
pytest --cov=your_package --cov-report=json
```

Use these formats for:
- Uploading to tools like Codecov, Coveralls
- Automated pipelines and dashboards
Summary
Feature | Description |
---|---|
`assert` | Native assert statements |
Fixtures | Dependency injection system |
Parametrize | Test multiple inputs easily |
Markers | Tag and filter tests |
Plugins | Extend pytest functionality |
Coverage | Test code coverage with pytest-cov |