Book API Testing Framework
A robust, maintainable API test automation framework built with Python, pytest, and requests.
Key Features
- Custom APIClient with built-in retry logic and comprehensive logging (see the sketch after this list)
- Reusable Validators for status codes, error messages, and response validation
- DRY Architecture using pytest fixtures and base classes
- Parallel Execution with pytest-xdist and worker-safe test data
- Rich Reporting with HTML and Allure integration
- Comprehensive Logging for all requests, responses, and assertions
- CI/CD Workflows: Automated code analysis and test execution via GitHub Actions
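A minimal sketch of the APIClient idea, assuming a requests.Session wrapped with urllib3 retry logic plus request/response logging. Class, method, and logger names here are illustrative, not the framework's actual API:

```python
# Illustrative sketch of an API client with retries and logging.
import logging

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

logger = logging.getLogger("api_client")


class APIClient:
    def __init__(self, base_url: str, retries: int = 3):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        retry = Retry(
            total=retries,
            backoff_factor=0.5,
            status_forcelist=[429, 500, 502, 503, 504],  # transient errors
            allowed_methods=["GET", "POST", "PUT", "DELETE"],
        )
        self.session.mount("http://", HTTPAdapter(max_retries=retry))
        self.session.mount("https://", HTTPAdapter(max_retries=retry))

    def request(self, method: str, path: str, **kwargs) -> requests.Response:
        # Log the outgoing request, send it, then log status and elapsed time.
        url = f"{self.base_url}/{path.lstrip('/')}"
        logger.info("%s %s payload=%s", method, url, kwargs.get("json"))
        response = self.session.request(method, url, **kwargs)
        logger.info("-> %s in %.0f ms", response.status_code,
                    response.elapsed.total_seconds() * 1000)
        return response
```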
Quick Start
- Install dependencies:
pip install -r requirements.txt
- Start your Book API server (see the conftest sketch after this list for one way the base URL can be supplied to the tests):
http://localhost:3000
- Run all tests:
pytest
- Run tests in parallel:
pytest -n auto --dist loadfile
- Run smoke tests quickly:
pytest -m smoke -n auto --dist loadfile
- Generate Allure report:
allure serve test-results/allure-results
- Generate a static Allure HTML report:
allure generate test-results/allure-results --clean -o test-results/allure-report
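The commands above assume the API is reachable at http://localhost:3000. A hypothetical conftest.py sketch for exposing that base URL and a client instance as fixtures (the env var, module, and fixture names are assumptions for this example, not necessarily the framework's own):

```python
# conftest.py (illustrative sketch): session-scoped fixtures for the base URL
# and the API client used by the tests.
import os

import pytest

from api_client import APIClient  # hypothetical module from the sketch above


@pytest.fixture(scope="session")
def base_url():
    # Allow overriding the default local server via an environment variable.
    return os.getenv("BOOK_API_BASE_URL", "http://localhost:3000")


@pytest.fixture(scope="session")
def api_client(base_url):
    return APIClient(base_url)
```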
Best Practices
- Parameterization: Use @pytest.mark.parametrize for edge cases
- Traceability: All requests and assertions are logged with detailed info
- Dependencies: Test ordering managed with pytest-dependency
- Retry Logic: Built-in handling for transient HTTP errors (429, 5xx)
- Parallel Safety: Unique test data per worker prevents collisions (see the sketch below)
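A short sketch of the parametrization and parallel-safety practices above. The api_client fixture and the /books endpoint are illustrative assumptions; worker_id is the fixture pytest-xdist provides to each worker:

```python
import os

import pytest


@pytest.mark.negative
@pytest.mark.parametrize("payload, expected_status", [
    ({}, 400),                       # missing required fields
    ({"title": ""}, 400),            # empty title
    ({"title": "x" * 10_000}, 400),  # oversized title
])
def test_create_book_rejects_invalid_payload(api_client, payload, expected_status):
    # api_client and the /books endpoint are assumptions for this sketch.
    response = api_client.request("POST", "/books", json=payload)
    assert response.status_code == expected_status


@pytest.fixture
def unique_title(worker_id):
    # pytest-xdist injects worker_id ("gw0", "gw1", ...); suffixing test data
    # with it keeps records created by parallel workers from colliding.
    return f"Test Book [{worker_id}-{os.getpid()}]"
```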
Tech Stack
- Python 3.8+ - Modern Python features
- pytest - Advanced testing framework
- requests - HTTP library with session management
- urllib3 - Retry logic and connection pooling
- allure-pytest - Rich test reporting
- pytest-xdist - Parallel test execution
Pytest Markers
- smoke: Quick smoke tests for core functionality
- regression: Comprehensive regression tests for business logic
- negative: Error handling and negative scenario tests
Run specific test types:
pytest -m regression
Combine markers for flexible execution:
pytest -m "smoke or regression"
pytest -m "regression and not negative"
Parallel execution with markers:
pytest -m smoke -n auto --dist loadfile
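The markers listed above need to be registered so pytest does not warn about unknown marks. One way to do that in Python (rather than in pytest.ini) is a pytest_configure hook; the descriptions below simply paraphrase the list above:

```python
# conftest.py (sketch): register the custom markers used by this framework.
def pytest_configure(config):
    config.addinivalue_line("markers", "smoke: quick smoke tests for core functionality")
    config.addinivalue_line("markers", "regression: comprehensive regression tests for business logic")
    config.addinivalue_line("markers", "negative: error handling and negative scenario tests")
```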
Test Case Mapping & Result Collection
- Test Case Mapping: Each test function is mapped to a unique test case ID using test-plan-suite.json for traceability and reporting.
- Result Collection: Test results are collected for each test case, including outcome, duration (ms), and iteration details. Results are aggregated and written to test-results/test-results-report.json after each run, supporting both serial and parallel execution (see the sketch below).
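A hedged sketch of how such result collection can be wired up with pytest hooks. The JSON fields are illustrative, and aggregating across xdist workers would need extra handling (for example, per-worker files merged at session end):

```python
# conftest.py (illustrative sketch): collect per-test outcome and duration,
# then write them to the report file at the end of the run.
import json
import pathlib

import pytest

_results = []


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":  # ignore setup/teardown phases
        _results.append({
            "test": item.nodeid,
            "outcome": report.outcome,
            "duration_ms": round(report.duration * 1000, 2),
        })


def pytest_sessionfinish(session, exitstatus):
    out = pathlib.Path("test-results/test-results-report.json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(_results, indent=2))
```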
CI/CD & GitHub Actions
- Code Analysis: Automated linting and static analysis on every push and pull request.
- Test Execution: Runs all tests and generates reports for every push and pull request, including parallel execution and publishing Allure/HTML reports.
Azure DevOps Test Plan Integration
Below are sample screenshots showing how test results are posted and visualized in Azure DevOps Test Plans: