# Recruitment Application Testing Guide

This guide provides comprehensive information about testing the Recruitment Application (ATS) system.
## Test Structure

The test suite is organized into several modules:
### 1. Basic Tests (`recruitment/tests.py`)

- `BaseTestCase`: common setup shared by all tests
- `ModelTests`: basic model functionality tests
- `ViewTests`: standard view tests
- `FormTests`: basic form validation tests
- `IntegrationTests`: simple integration scenarios
### 2. Advanced Tests (`recruitment/tests_advanced.py`)

- `AdvancedModelTests`: complex model scenarios and edge cases
- `AdvancedViewTests`: complex view logic with multiple filters and workflows
- `AdvancedFormTests`: complex form validation and dynamic fields
- `AdvancedIntegrationTests`: end-to-end workflows and concurrent operations
- `SecurityTests`: security-focused testing
### 3. Configuration Files

- `pytest.ini`: pytest configuration with coverage settings
- `conftest.py`: pytest fixtures and common test setup
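A `pytest.ini` for this setup might look roughly like the sketch below. The settings-module path is an assumption (replace it with the project's real one), and the marker names match those documented later in this guide:

```ini
[pytest]
; Assumption: substitute the project's actual settings module
DJANGO_SETTINGS_MODULE = project.settings
python_files = tests.py tests_*.py
addopts = --cov=recruitment --cov-report=html --cov-report=term-missing
markers =
    unit: unit tests
    integration: integration tests
    security: security tests
    api: API tests
    slow: performance-intensive tests
```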
## Running Tests

### Basic Test Execution

```shell
# Run all tests
python manage.py test recruitment

# Run a specific test class
python manage.py test recruitment.tests_advanced.AdvancedModelTests

# Run with verbose output
python manage.py test recruitment --verbosity=2

# Run tests with coverage (requires the coverage package)
coverage run --source=recruitment manage.py test recruitment
coverage report
```
### Using Pytest

```shell
# Install pytest and required packages
pip install pytest pytest-django pytest-cov

# Run all tests
pytest

# Run a specific test file
pytest recruitment/tests.py

# Run with coverage
pytest --cov=recruitment --cov-report=html

# Run with markers
pytest -m unit           # run only unit tests
pytest -m integration    # run only integration tests
pytest -m "not slow"     # skip slow tests
```
## Test Markers

- `@pytest.mark.unit`: unit tests
- `@pytest.mark.integration`: integration tests
- `@pytest.mark.security`: security tests
- `@pytest.mark.api`: API tests
- `@pytest.mark.slow`: performance-intensive tests
## Test Coverage

The test suite aims for at least 80% code coverage. Coverage reports are generated in two formats:

- HTML: `htmlcov/index.html`
- Terminal: summary showing missing lines
### Improving Coverage
- Add tests for untested branches
- Test error conditions and edge cases
- Use mocking for external dependencies
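As a rough sketch of the last two points, the test below covers an error branch and replaces an external side effect with a mock. The `advance_candidate` helper and its notifier are hypothetical stand-ins, not app code:

```python
import unittest
from unittest.mock import MagicMock

def advance_candidate(candidate, stage, notify=None):
    """Hypothetical helper: move a candidate to a new stage and notify."""
    valid_stages = {"APPLIED", "INTERVIEW", "OFFER", "HIRED"}
    if stage not in valid_stages:
        raise ValueError(f"unknown stage: {stage}")
    candidate["stage"] = stage
    if notify:
        notify(candidate)
    return candidate

class AdvanceCandidateTests(unittest.TestCase):
    def test_invalid_stage_raises(self):
        # Edge case: exercise the error branch, not just the happy path.
        with self.assertRaises(ValueError):
            advance_candidate({"stage": "APPLIED"}, "BOGUS")

    def test_notifier_called_once(self):
        # The external dependency is replaced by a mock, so no real
        # email/API call happens and the interaction can be asserted.
        notify = MagicMock()
        result = advance_candidate({"stage": "APPLIED"}, "INTERVIEW", notify=notify)
        self.assertEqual(result["stage"], "INTERVIEW")
        notify.assert_called_once()
```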
## Key Testing Areas

### 1. Model Testing

- `JobPosting`: ID generation, validation, methods
- `Candidate`: stage transitions, relationships
- `ZoomMeeting`: time validation, status handling
- `FormTemplate`: template integrity, field ordering
- `BulkInterviewTemplate`: scheduling logic, slot generation
### 2. View Testing

- Job management: CRUD operations, search, filtering
- Candidate management: stage updates, bulk operations
- Meeting management: scheduling, API integration
- Form handling: submission processing, validation
### 3. Form Testing

- `JobPostingForm`: complex validation, field dependencies
- `CandidateForm`: file upload, validation
- `BulkInterviewTemplateForm`: dynamic fields, validation
- `MeetingCommentForm`: comment creation/editing
### 4. Integration Testing

- Complete hiring workflow: Job → Application → Interview → Hire
- Data integrity: cross-component data consistency
- API integration: Zoom API, LinkedIn integration
- Concurrent operations: multi-threading scenarios
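Concurrent-operation tests can be sketched with the standard library alone. The `StageCounter` below is an illustrative stand-in for a shared resource, not app code; the same pattern applies to concurrent stage updates against the database:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class StageCounter:
    """Stand-in for a shared resource updated by concurrent requests."""
    def __init__(self):
        self._lock = threading.Lock()
        self.transitions = 0

    def record_transition(self):
        # The lock makes the read-modify-write atomic,
        # so no update is lost under contention.
        with self._lock:
            self.transitions += 1

def test_concurrent_stage_updates():
    counter = StageCounter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        for _ in range(200):
            pool.submit(counter.record_transition)
    # All 200 updates must survive the race.
    assert counter.transitions == 200

test_concurrent_stage_updates()
```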
### 5. Security Testing

- Access control: permission validation
- CSRF protection: form security
- Input validation: SQL injection and XSS prevention
- Authentication: user authorization
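A minimal sketch of an output-escaping (XSS) check using only the standard library; the rendering helper here is hypothetical, standing in for whatever template code emits user-supplied values:

```python
import html
import unittest

def render_candidate_name(name):
    """Hypothetical template helper: must escape user-supplied input."""
    return f"<span class='candidate'>{html.escape(name)}</span>"

class XssPreventionTests(unittest.TestCase):
    def test_script_tags_are_escaped(self):
        payload = "<script>alert('xss')</script>"
        rendered = render_candidate_name(payload)
        # Raw tags must not survive; escaped entities must.
        self.assertNotIn("<script>", rendered)
        self.assertIn("&lt;script&gt;", rendered)
```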
## Test Fixtures

Common fixtures available in `conftest.py`:

- **User fixtures**: `user`, `staff_user`, `profile`
- **Model fixtures**: `job`, `candidate`, `zoom_meeting`, `form_template`
- **Form data fixtures**: `job_form_data`, `candidate_form_data`
- **Mock fixtures**: `mock_zoom_api`, `mock_time_slots`
- **Client fixtures**: `client`, `authenticated_client`, `authenticated_staff_client`
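A short sketch of defining and consuming such a fixture in pytest style. The field names in the payload are assumptions, not the project's actual `job_form_data` contents:

```python
import pytest

def build_job_form_data():
    """Plain helper so the payload can also be built outside pytest."""
    # Field names here are illustrative assumptions.
    return {
        "title": "Backend Engineer",
        "department": "IT",
        "job_type": "FULL_TIME",
        "workplace_type": "REMOTE",
    }

@pytest.fixture
def job_form_data():
    return build_job_form_data()

@pytest.mark.unit
def test_job_form_data_is_complete(job_form_data):
    # Every required field must be present and non-empty.
    for field in ("title", "department", "job_type", "workplace_type"):
        assert job_form_data[field]
```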
## Writing New Tests

### Test Naming Convention

- Use descriptive names: `test_user_can_create_job_posting`
- Follow the pattern: `test_[subject]_[action]_[expected_result]`
### Best Practices

- **Use fixtures**: leverage existing fixtures instead of creating test data by hand
- **Mock external dependencies**: use `@patch` for API calls
- **Test edge cases**: include invalid data and boundary conditions
- **Maintain independence**: each test should be runnable on its own
- **Use specific assertions**: be explicit about expected outcomes
### Example Test Structure

```python
from django.core.exceptions import ValidationError

from recruitment.models import JobPosting
from recruitment.tests import BaseTestCase


class JobPostingTests(BaseTestCase):
    def test_job_creation_minimal_data(self):
        """Test job creation with minimal required fields."""
        job = JobPosting.objects.create(
            title='Minimal Job',
            department='IT',
            job_type='FULL_TIME',
            workplace_type='REMOTE',
            created_by=self.user
        )
        self.assertEqual(job.title, 'Minimal Job')
        self.assertIsNotNone(job.slug)

    def test_job_posting_validation_invalid_data(self):
        """Test that invalid data raises validation errors."""
        job = JobPosting(
            title='',  # empty title
            department='IT',
            job_type='FULL_TIME',
            workplace_type='REMOTE',
            created_by=self.user
        )
        # objects.create() does not run model validation;
        # full_clean() is what raises ValidationError.
        with self.assertRaises(ValidationError):
            job.full_clean()
```
## Testing External Integrations

### Zoom API Integration

```python
from unittest.mock import patch

@patch('recruitment.views.create_zoom_meeting')
def test_meeting_creation(self, mock_zoom):
    """Test Zoom meeting creation with a mocked API."""
    mock_zoom.return_value = {
        'status': 'success',
        'meeting_details': {
            'meeting_id': '123456789',
            'join_url': 'https://zoom.us/j/123456789'
        }
    }
    # Exercise the code path that calls the (patched) API
    result = create_zoom_meeting('Test Meeting', start_time, duration)
    self.assertEqual(result['status'], 'success')
    mock_zoom.assert_called_once()
```
### LinkedIn Integration

```python
from unittest.mock import patch

@patch('recruitment.views.LinkedinService')
def test_linkedin_posting(self, mock_linkedin):
    """Test LinkedIn job posting with a mocked service."""
    mock_service = mock_linkedin.return_value
    mock_service.create_job_post.return_value = {
        'success': True,
        'post_id': 'linkedin123',
        'post_url': 'https://linkedin.com/jobs/view/linkedin123'
    }
    # Exercise the LinkedIn posting logic
    result = mock_service.create_job_post(job)
    self.assertTrue(result['success'])
```
## Performance Testing

### Running Performance Tests

```shell
# Run slow tests only
pytest -m slow

# Profile test execution (requires the pytest-profiling plugin)
pytest --profile
```
### Performance Considerations

- Use `TransactionTestCase` for tests that require database commits
- Mock external API calls to avoid network delays
- Use `select_related` and `prefetch_related` in queries
- Test with realistic data volumes
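One way to guard against slow regressions is a time-budget decorator; the sketch below uses only the standard library, and `build_candidate_batch` is an illustrative stand-in for expensive fixture creation:

```python
import time
from functools import wraps

def fail_if_slower_than(budget_seconds):
    """Decorator: raise AssertionError when the wrapped callable
    exceeds its time budget."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            assert elapsed <= budget_seconds, (
                f"{func.__name__} took {elapsed:.3f}s, "
                f"budget {budget_seconds}s"
            )
            return result
        return wrapper
    return decorator

@fail_if_slower_than(1.0)
def build_candidate_batch(n):
    # Stand-in for expensive test-data creation.
    return [{"name": f"candidate-{i}"} for i in range(n)]
```

Wall-clock assertions like this are inherently flaky on loaded CI machines, so budgets should be generous and such tests marked `slow`.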
## Continuous Integration

### GitHub Actions Integration

```yaml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Quote the versions so YAML does not read 3.10 as 3.1
        python-version: ['3.9', '3.10', '3.11']
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-django pytest-cov
      - name: Run tests
        run: |
          pytest --cov=recruitment --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
```
## Troubleshooting Common Issues

### Database Issues

```python
# Use TransactionTestCase for tests that modify database structure
from django.test import TransactionTestCase

class MyTests(TransactionTestCase):
    def test_database_modification(self):
        # This test will properly clean up the database
        pass
```
### Mocking Issues

```python
# Patch the name where it is looked up (the importing module),
# not where it is defined
from unittest.mock import patch

@patch('recruitment.views.zoom_api.ZoomClient')
def test_zoom_integration(self, mock_zoom_client):
    mock_instance = mock_zoom_client.return_value
    mock_instance.create_meeting.return_value = {'success': True}
    # Test code exercising the view goes here
```
### HTMX Testing

```python
# Test HTMX partial responses
def test_htmx_partial_update(self):
    response = self.client.get('/some-url/', HTTP_HX_REQUEST='true')
    self.assertEqual(response.status_code, 200)
    # response.content is bytes, so compare against a bytes literal
    self.assertIn(b'partial-content', response.content)
```
## Contributing to Tests

### Adding New Tests

- Place tests in the appropriate test module
- Use existing fixtures when possible
- Add descriptive docstrings
- Mark tests with the appropriate markers
- Ensure new tests maintain the coverage requirement
### Test Review Checklist
- Tests are properly isolated
- Fixtures are used effectively
- External dependencies are mocked
- Edge cases are covered
- Naming conventions are followed
- Documentation is clear