Testing

PyPDFForm uses pytest for testing and coverage.py for measuring test coverage. Run the tests by executing:

coverage run -m pytest && coverage report --fail-under=100
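
To iterate on a single test or a subset of the suite during development, pytest's usual selection flags work as well (the -k expression below is just an illustration):

pytest -k fill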

Generating a coverage report

To generate a coverage report, run:

coverage run -m pytest && coverage html

The coverage report can then be viewed by opening htmlcov/index.html in a browser.
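
On most systems the report can be opened straight from the command line (these are general platform commands, not part of PyPDFForm):

open htmlcov/index.html       # macOS
xdg-open htmlcov/index.html   # Linux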

Test breakdown

Each PyPDFForm test is different, but almost all of them follow the same general paradigm.

In most cases, a test can be summed up in three steps:

  • Define an expected PDF file that the test's output should match.
  • Execute a sequence of PyPDFForm code to generate a PDF.
  • Compare the generated PDF with the expected PDF file.

Consider this example test:

import os

from PyPDFForm import PdfWrapper


def test_fill(pdf_samples, request):
    # The expected PDF this test's output should match byte for byte.
    expected_path = os.path.join(pdf_samples, "sample_filled.pdf")
    with open(expected_path, "rb+") as f:
        # Fill the sample template with a dictionary of widget values.
        obj = PdfWrapper(
            os.path.join(pdf_samples, "sample_template.pdf")
        ).fill(
            {
                "test": "test_1",
                "check": True,
                "test_2": "test_2",
                "check_2": False,
                "test_3": "test_3",
                "check_3": True,
            },
        )

        # Stash the expected path and the generated stream so expected
        # files can be regenerated when tests need updating.
        request.config.results["expected_path"] = expected_path
        request.config.results["stream"] = obj.read()

        expected = f.read()

        # Compare the generated PDF with the expected one.
        assert len(obj.read()) == len(expected)
        assert obj.read() == expected

The test starts by building the path to the expected PDF, sample_filled.pdf:

expected_path = os.path.join(pdf_samples, "sample_filled.pdf")
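
The pdf_samples argument is a pytest fixture that resolves the directory holding the sample PDFs. Its real definition lives in the test suite's conftest; a minimal sketch of what such a fixture could look like (the directory layout assumed here is illustrative):

import os

import pytest


@pytest.fixture
def pdf_samples():
    # Assumption: the sample PDFs live in a pdf_samples directory
    # next to the test modules; adjust to the repo's actual layout.
    return os.path.join(os.path.dirname(__file__), "pdf_samples")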

The test then fills sample_template.pdf with a data dictionary using PdfWrapper:

obj = PdfWrapper(
    os.path.join(pdf_samples, "sample_template.pdf")
).fill(
    {
        "test": "test_1",
        "check": True,
        "test_2": "test_2",
        "check_2": False,
        "test_3": "test_3",
        "check_3": True,
    },
)
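
As the assertions later in the test suggest, fill returns the wrapper itself and read returns the raw PDF bytes, so outside of a test the same result can be written to disk:

with open("output.pdf", "wb+") as output:
    output.write(obj.read())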

The next two lines should be included in almost every test to make updating old tests easier:

request.config.results["expected_path"] = expected_path
request.config.results["stream"] = obj.read()

Finally, the test compares the resulting stream with the expected file's stream. The length check fails fast with a short, readable message before the full byte-for-byte comparison:

expected = f.read()

assert len(obj.read()) == len(expected)
assert obj.read() == expected