# Testing Strategy
The Ominis Cluster Manager uses a compartmentalized testing approach that separates tests by scope, speed, and dependencies. This enables fast development cycles while maintaining comprehensive coverage for production deployments.
## Overview
The testing strategy is built on three core principles:
- Fast Feedback: Unit and API tests run in milliseconds, enabling rapid iteration
- Comprehensive Coverage: 100+ endpoints tested with error scenarios
- Infrastructure Testing: E2E tests validate Kubernetes pod lifecycle separately
All tests use production-ready defaults (e.g., longest-idle-agent strategy, 1000 max sessions) to catch configuration issues early.
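Centralizing those defaults keeps every compartment on the same configuration. A minimal sketch (the constant and helper names here are hypothetical, not taken from the codebase):

```python
# Hypothetical helper: build test payloads from the production-ready
# defaults mentioned above (longest-idle-agent strategy, 1000 max sessions),
# so any drift from production configuration surfaces in tests immediately.
PRODUCTION_DEFAULTS = {
    "strategy": "longest-idle-agent",  # production routing strategy
    "max_sessions": 1000,              # production session ceiling
}

def make_queue_payload(name: str, **overrides) -> dict:
    """Queue payload = production defaults + per-test overrides."""
    return {"name": name, **PRODUCTION_DEFAULTS, **overrides}
```

Tests that need a non-default value override it explicitly, which makes the deviation visible in the test body.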
## Test Structure
The test suite is organized into four compartments, each serving a specific purpose:
### Directory Structure
```
tests/
├── unit/                                  # Unit tests (no external dependencies)
│   ├── test_auth.py                       # API key authentication logic
│   ├── test_error_handlers.py             # Error handling and response formatting
│   └── test_middleware.py                 # Middleware (CORS, Prometheus)
│
├── api/                                   # API contract tests (mocked backends)
│   ├── test_queues_comprehensive.py       # Queue CRUD endpoints
│   ├── test_extensions_comprehensive.py   # Extension management
│   ├── test_callcenter_comprehensive.py   # Agents/tiers/members
│   ├── test_telephony_comprehensive.py    # Call control endpoints
│   ├── test_campaigns_comprehensive.py    # Campaign management
│   ├── test_acl_comprehensive.py          # ACL management
│   ├── test_call_control_comprehensive.py # n8n call control API
│   ├── test_directory_comprehensive.py    # FreeSWITCH XML-CURL
│   ├── test_ivr_n8n_comprehensive.py      # IVR n8n integration
│   └── test_channels_comprehensive.py     # Channel monitoring
│
├── e2e/                                   # End-to-end infrastructure tests
│   ├── test_01_health.py                  # Health/connectivity checks
│   ├── test_02_queue_lifecycle.py         # Queue pod creation/deletion (K8s)
│   ├── test_03_acl_reload.py              # ACL ConfigMap + FreeSWITCH reload
│   └── test_04_ivr_lifecycle.py           # IVR pod creation/deletion (K8s)
│
├── integration/                           # Multi-step workflow tests
│   ├── test_queue_workflow.py             # Create queue → agent → tier → call
│   ├── test_extension_workflow.py         # Create extension → register → call
│   └── test_campaign_workflow.py          # Create campaign → contacts → monitor
│
└── helpers/                               # Shared test utilities
    ├── endpoint_registry.py               # Complete endpoint registry
    └── schema_validators.py               # Schema validation helpers
```
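The `helpers/endpoint_registry.py` module is described as a complete endpoint registry; its internals are not shown here, but one plausible shape is a flat list mapping each endpoint to its backend marker, which coverage tooling can iterate to assert nothing is missed. A sketch, with illustrative entries and hypothetical names:

```python
# Hypothetical sketch of an endpoint registry: one entry per endpoint,
# tagged with the backend it depends on (matching the pytest markers).
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    method: str
    path: str
    backend: str  # one of: xml_rpc, database, kubernetes, hybrid, static

ENDPOINT_REGISTRY = [
    Endpoint("GET", "/v1/queues", "kubernetes"),
    Endpoint("POST", "/v1/telephony/originate", "xml_rpc"),
    Endpoint("GET", "/v1/extensions", "database"),
]

def endpoints_for(backend: str) -> list:
    """Return all registered endpoints that hit the given backend."""
    return [e for e in ENDPOINT_REGISTRY if e.backend == backend]
```

A registry like this lets a meta-test compare the FastAPI route table against `ENDPOINT_REGISTRY` and fail when a new endpoint ships untested.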
## Test Types
### Unit Tests

Purpose: Test individual functions/classes in isolation

- Speed: Very fast (<10ms per test)
- Dependencies: None (fully mocked)
- Coverage: Business logic, validators, utilities
- Location: `tests/unit/`

```python
@pytest.mark.unit
@pytest.mark.asyncio
class TestAPIAuthentication:
    async def test_missing_api_key_returns_401(self, unauthenticated_client):
        response = await unauthenticated_client.get("/v1/queues")
        assert response.status_code == status.HTTP_401_UNAUTHORIZED

        data = response.json()
        assert data["code"] == "MISSING_API_KEY"
```
### API Tests

Purpose: Test API contracts, schemas, and error handling

- Speed: Fast (10-100ms per test)
- Dependencies: Mocked (database, XML-RPC, Kubernetes)
- Coverage: Every endpoint has a happy path plus error scenarios
- Location: `tests/api/`

```python
@pytest.mark.api
@pytest.mark.kubernetes
@pytest.mark.asyncio
class TestQueuesListEndpoint:
    async def test_list_queues_success(self, api_client):
        response = await api_client.get("/v1/queues")
        assert response.status_code == status.HTTP_200_OK

        data = response.json()
        assert "queues" in data
        assert "total" in data
```
### E2E Tests

Purpose: Test infrastructure integration (Kubernetes, FreeSWITCH)

- Speed: Slow (1-5 minutes per test with pod creation)
- Dependencies: Real infrastructure required
- Coverage: Pod lifecycle, ConfigMap changes, system readiness
- Location: `tests/e2e/`

```python
@pytest.mark.e2e  # required so `pytest -m "not e2e"` can exclude these
@pytest.mark.asyncio
@pytest.mark.timeout(300)
async def test_queue_pod_lifecycle(api_client):
    # Create queue
    create_response = await api_client.post("/v1/queues", json=queue_data)
    assert create_response.status_code == 201

    # Wait for pod ready
    await wait_for_pod_ready("queue-sales", timeout=120)

    # Verify pod exists
    status_response = await api_client.get("/v1/queues/sales/status")
    assert status_response.json()["pod_status"] == "Running"
```
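The `wait_for_pod_ready` helper used above is not shown; its core is presumably a poll-until-ready loop against the Kubernetes API. A generic sketch of that retry logic, with the status probe injected so the loop stands alone (names and signature are illustrative, not the project's actual helper):

```python
# Hypothetical polling loop behind a wait_for_pod_ready-style helper:
# repeatedly call a status probe until it reports the expected phase,
# or raise once the timeout elapses.
import asyncio
import time

async def wait_until(probe, expected: str, timeout: float, interval: float = 2.0):
    """Poll `probe()` until it returns `expected`; raise TimeoutError otherwise."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if await probe() == expected:
            return
        await asyncio.sleep(interval)  # back off between K8s API reads
    raise TimeoutError(f"status did not reach {expected!r} within {timeout}s")
```

In the real helper, `probe` would read the pod phase from the Kubernetes API; keeping the probe injectable also makes the retry logic unit-testable without a cluster.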
### Integration Tests

Purpose: Test multi-step workflows across endpoints

- Speed: Medium (100ms-1s per test)
- Dependencies: Real or mocked, depending on the workflow
- Coverage: User journeys and cross-cutting concerns
- Location: `tests/integration/`

```python
@pytest.mark.integration
@pytest.mark.asyncio
async def test_complete_queue_workflow(api_client):
    # 1. Create queue
    queue_response = await api_client.post("/v1/queues", json=queue_data)
    queue_name = queue_response.json()["name"]

    # 2. Add agent
    agent_response = await api_client.post("/v1/agents", json=agent_data)

    # 3. Create tier
    tier_response = await api_client.post("/v1/tiers", json={
        "queue": queue_name,
        "agent": agent_data["name"]
    })

    # 4. Verify a call can be placed
    call_response = await api_client.post("/v1/telephony/originate",
                                          json=originate_data)
    assert call_response.status_code == 200
```
## Backend Classification
Tests are tagged by backend type to track infrastructure dependencies and identify bottlenecks:
### Marker Definitions
| Marker | Backend | Endpoint Count | Example Endpoints |
|---|---|---|---|
| `@pytest.mark.xml_rpc` | FreeSWITCH XML-RPC | ~35 | `/v1/telephony/originate`, `/v1/campaigns/{id}/start` |
| `@pytest.mark.database` | PostgreSQL | ~40 | `/v1/extensions`, `/v1/agents` |
| `@pytest.mark.kubernetes` | K8s API | ~15 | `/v1/queues`, `/v1/acl` |
| `@pytest.mark.hybrid` | Multiple backends | ~20 | `/v1/freeswitch/directory`, `/v1/extensions/{id}/reload` |
| `@pytest.mark.static` | Configuration/health | ~10 | `/health`, `/metrics` |
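Custom markers must be registered with pytest, or every use raises an "unknown marker" warning and `pytest -m` selection becomes unreliable. One way to do this (a sketch for a `conftest.py`; the help texts are illustrative, not taken from the project):

```python
# Hypothetical conftest.py fragment registering the backend markers
# from the table above via pytest's documented hook.
BACKEND_MARKERS = {
    "xml_rpc": "test talks to FreeSWITCH over XML-RPC",
    "database": "test talks to PostgreSQL",
    "kubernetes": "test talks to the K8s API",
    "hybrid": "test spans multiple backends",
    "static": "test hits configuration/health endpoints",
}

def pytest_configure(config):
    """Pytest startup hook: register each marker with its help text."""
    for name, help_text in BACKEND_MARKERS.items():
        config.addinivalue_line("markers", f"{name}: {help_text}")
```

Registering markers in one place also gives `pytest --markers` a self-documenting list of the backend taxonomy.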
### Usage Example

```python
@pytest.mark.api
@pytest.mark.xml_rpc  # This test uses FreeSWITCH XML-RPC
@pytest.mark.asyncio
async def test_originate_call_success(api_client):
    response = await api_client.post("/v1/telephony/originate", json={
        "destination": "1001@default",
        "caller_id_number": "5551234"
    })
    assert response.status_code == 200
```
## Running Tests
### Quick Commands

```shell
# Fast development loop (recommended)
pytest tests/api/ -v                              # All API tests (~10-100ms each)
pytest tests/api/test_queues_comprehensive.py -v  # Specific router

# Run by test type
pytest tests/unit/ -v               # Unit tests only
pytest tests/e2e/ -v --timeout=300  # E2E tests (slow)
pytest tests/integration/ -v        # Integration workflows

# Run all tests
make test   # Uses pytest -q (quiet output)
pytest      # All tests with default output
```
### Run by Backend Type

Filter tests by infrastructure dependency:

```shell
# Run only database tests
pytest -m database -v

# Run only XML-RPC tests
pytest -m xml_rpc -v

# Run only Kubernetes tests
pytest -m kubernetes -v

# Run hybrid backend tests
pytest -m hybrid -v

# Skip slow E2E tests
pytest -m "not e2e" -v
```
### Makefile Integration

```shell
# Test targets in the Makefile
make test    # Run all tests (pytest -q)
make lint    # Run linters (ruff + black)
make doctor  # Environment checks

# Combined workflow
make lint && make test
```
## Test Execution Flow
## Coverage Strategy

### Endpoint Coverage Goals
- ✅ 100% endpoint coverage: Every endpoint tested
- ✅ Error scenarios: 401, 404, 400, 409, 500
- ✅ 90%+ line coverage: On router files
- ✅ Schema validation: All fields validated
### Coverage by Router
| Router | Endpoints | Happy Path | Error Cases | Status |
|---|---|---|---|---|
| Queues | 8 | ✅ | ✅ | Complete |
| Extensions | 12 | ✅ | ✅ | Complete |
| Callcenter Direct | 15 | ✅ | ✅ | Complete |
| Telephony | 25+ | ✅ | ✅ | Complete |
| Campaigns | 10 | ✅ | ✅ | Complete |
| ACL | 4 | ✅ | ✅ | Complete |
| Call Control | 13 | ✅ | ✅ | Complete |
| Directory | 1 | ✅ | ✅ | Complete |
| IVR n8n | 7 | ✅ | ✅ | Complete |
| Channels | 3 |