ATS Load Testing Framework
This directory contains a comprehensive load testing framework for the ATS (Applicant Tracking System) application using Locust. The framework provides realistic user simulation, performance monitoring, and detailed reporting capabilities.
📋 Table of Contents
- Overview
- Installation
- Quick Start
- Test Scenarios
- Configuration
- Test Data Generation
- Performance Monitoring
- Reports
- Distributed Testing
- Best Practices
- Troubleshooting
🎯 Overview
The ATS load testing framework includes:
- Multiple User Types: Public users, authenticated users, API clients, file uploaders
- Realistic Scenarios: Job browsing, application submission, dashboard access, API calls
- Performance Monitoring: System metrics, database performance, response times
- Comprehensive Reporting: HTML reports, JSON data, performance charts
- Test Data Generation: Automated creation of realistic test data
- Distributed Testing: Master-worker setup for large-scale tests
🚀 Installation
Prerequisites
# Python 3.8+ required
python --version
# Install required packages
pip install locust faker psutil matplotlib pandas requests
# Optional: For enhanced reporting
pip install jupyter notebook seaborn
Setup
- Clone the repository and navigate to the project root
- Install dependencies:
pip install -r requirements.txt
# or install the core packages directly:
pip install locust faker psutil matplotlib pandas
- Set up environment variables:
export ATS_HOST="http://localhost:8000"
export TEST_USERNAME="your_test_user"
export TEST_PASSWORD="your_test_password"
⚡ Quick Start
1. List Available Scenarios
python load_tests/run_load_tests.py list
2. Run a Smoke Test
# Interactive mode with web UI
python load_tests/run_load_tests.py run smoke_test
# Headless mode (no web UI)
python load_tests/run_load_tests.py headless smoke_test
3. Generate Test Data
python load_tests/run_load_tests.py generate-data --jobs 100 --users 50 --applications 500
4. View Results
After running tests, check the load_tests/results/ directory for:
- HTML reports
- CSV statistics
- Performance charts
- JSON data
📊 Test Scenarios
Available Scenarios
| Scenario | Users | Duration | Description |
|---|---|---|---|
| smoke_test | 5 | 2m | Quick sanity check |
| light_load | 20 | 5m | Normal daytime traffic |
| moderate_load | 50 | 10m | Peak traffic periods |
| heavy_load | 100 | 15m | Stress testing |
| api_focus | 30 | 10m | API endpoint testing |
| file_upload_test | 15 | 8m | File upload performance |
| authenticated_test | 25 | 8m | Authenticated user workflows |
| endurance_test | 30 | 1h | Long-running stability |
User Types
- PublicUser: Anonymous users browsing jobs and careers
- AuthenticatedUser: Logged-in users with full access
- APIUser: REST API clients
- FileUploadUser: Users uploading resumes and documents
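Each of these maps to a Locust user class in locustfile.py. As a minimal sketch of the pattern (simplified; the real classes define more tasks and weights, and the /jobs/ URLs here are placeholders):
from locust import HttpUser, task, between

class PublicUser(HttpUser):
    """Anonymous visitor browsing public job pages."""
    wait_time = between(1, 5)  # think time between tasks, in seconds

    @task(3)  # weight: browse roughly 3x as often as viewing details
    def browse_jobs(self):
        self.client.get("/jobs/")

    @task(1)
    def view_job_detail(self):
        # placeholder id; real tasks pick ids from seeded test data
        self.client.get("/jobs/1/", name="/jobs/[id]/")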
Common Workflows
- Job listing browsing
- Job detail viewing
- Application form access
- Application submission
- Dashboard navigation
- Message viewing
- File uploads
- API endpoint calls
⚙️ Configuration
Environment Variables
# Target application host
export ATS_HOST="http://localhost:8000"
# Test user credentials (for authenticated tests)
export TEST_USERNAME="testuser"
export TEST_PASSWORD="testpass123"
# Database connection (for monitoring)
export DATABASE_URL="postgresql://user:pass@localhost/kaauh_ats"
Custom Scenarios
Create custom scenarios by modifying load_tests/config.py:
"custom_scenario": TestScenario(
name="Custom Load Test",
description="Your custom test description",
users=75,
spawn_rate=15,
run_time="20m",
host="http://your-host.com",
user_classes=["PublicUser", "AuthenticatedUser"],
tags=["custom", "specific"]
)
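TestScenario itself lives in load_tests/config.py. If you are adapting the framework, a plausible minimal definition is a dataclass mirroring the fields above (a sketch, not necessarily the exact shipped code):
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestScenario:
    name: str
    description: str
    users: int            # peak concurrent simulated users
    spawn_rate: int       # users started per second during ramp-up
    run_time: str         # Locust-style duration string, e.g. "20m"
    host: str
    user_classes: List[str]
    tags: List[str] = field(default_factory=list)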
Performance Thresholds
Adjust performance thresholds in load_tests/config.py:
PERFORMANCE_THRESHOLDS = {
"response_time_p95": 2000, # 95th percentile under 2s
"response_time_avg": 1000, # Average under 1s
"error_rate": 0.05, # Error rate under 5%
"rps_minimum": 10, # Minimum 10 RPS
}
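These thresholds can also be enforced after a headless run by reading the *_stats.csv file that Locust writes; a sketch (column names match recent Locust releases, so verify them against your version):
import csv
import sys

def check_thresholds(stats_csv: str) -> bool:
    with open(stats_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["Name"] != "Aggregated":
                continue
            p95 = float(row["95%"])
            avg = float(row["Average Response Time"])
            error_rate = int(row["Failure Count"]) / max(int(row["Request Count"]), 1)
            rps = float(row["Requests/s"])
            return p95 < 2000 and avg < 1000 and error_rate < 0.05 and rps >= 10
    return False

if __name__ == "__main__":
    sys.exit(0 if check_thresholds(sys.argv[1]) else 1)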
📝 Test Data Generation
Generate Realistic Data
# Default configuration
python load_tests/run_load_tests.py generate-data
# Custom configuration
python load_tests/run_load_tests.py generate-data \
--jobs 200 \
--users 100 \
--applications 1000
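The generator leans on Faker (it is in the dependency list); a condensed illustration of the approach (the field names here are illustrative, not the actual model schema):
from faker import Faker

fake = Faker()

def make_job() -> dict:
    # Illustrative dict; the real generator creates Django model instances
    return {
        "title": fake.job(),
        "description": fake.paragraph(nb_sentences=5),
        "location": fake.city(),
        "posted_at": fake.date_this_year().isoformat(),
        "contact_email": fake.company_email(),
    }

jobs = [make_job() for _ in range(100)]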
Generated Data Types
- Jobs: Realistic job postings with descriptions, qualifications, benefits
- Users: User profiles with contact information and social links
- Applications: Complete application records with cover letters
- Interviews: Scheduled interviews with various types and statuses
- Messages: User communications and notifications
Test Files
Automatically generated test files for upload testing:
- Text files with realistic content
- Various sizes (configurable)
- Stored in load_tests/test_files/
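If you ever need to rebuild these files by hand, the idea reduces to Faker text padded to target sizes; a sketch under assumed sizes (the framework's own generator may differ):
from pathlib import Path
from faker import Faker

fake = Faker()
out_dir = Path("load_tests/test_files")
out_dir.mkdir(parents=True, exist_ok=True)

for size_kb in (10, 100, 1024):  # assumed sizes; make these configurable as needed
    text = fake.text(max_nb_chars=size_kb * 1024)
    (out_dir / f"upload_{size_kb}kb.txt").write_text(text)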
📈 Performance Monitoring
System Metrics
- CPU Usage: Percentage utilization
- Memory Usage: RAM consumption and usage percentage
- Disk I/O: Read/write operations
- Network I/O: Bytes sent/received, packet counts
- Active Connections: Number of network connections
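All of these can be sampled with psutil, which the framework depends on; a sketch of a typical collection function:
import psutil

def sample_system_metrics() -> dict:
    mem = psutil.virtual_memory()
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),  # blocks ~1s to measure
        "memory_used_mb": mem.used / 1024 / 1024,
        "memory_percent": mem.percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
        # may require elevated privileges on some platforms
        "active_connections": len(psutil.net_connections()),
    }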
Database Metrics
- Active Connections: Current database connections
- Query Count: Total queries executed
- Average Query Time: Mean query execution time
- Slow Queries: Count of slow-running queries
- Cache Hit Ratio: Database cache effectiveness
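On PostgreSQL, most of these come straight from the pg_stat_* system views; for example (a sketch using psycopg2, which is not in the base dependency list, and the DATABASE_URL set above):
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn.cursor() as cur:
    # Currently active connections
    cur.execute("SELECT count(*) FROM pg_stat_activity WHERE state = 'active'")
    active = cur.fetchone()[0]
    # Cache hit ratio across all databases
    cur.execute(
        "SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0) "
        "FROM pg_stat_database"
    )
    hit_ratio = cur.fetchone()[0] or 0.0
print(f"active connections: {active}, cache hit ratio: {hit_ratio:.2%}")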
Real-time Monitoring
During tests, the framework monitors:
- Response times (avg, median, 95th, 99th percentiles)
- Request rates (current and peak)
- Error rates and types
- System resource utilization
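You can hook extra checks into this stream through Locust's event API; for instance, flagging slow requests as they complete (Locust 2.x listener signature):
from locust import events

@events.request.add_listener
def flag_slow_requests(request_type, name, response_time, exception, **kwargs):
    # response_time is reported in milliseconds
    if exception is None and response_time > 2000:
        print(f"SLOW: {request_type} {name} took {response_time:.0f} ms")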
📋 Reports
HTML Reports
Comprehensive web-based reports including:
- Executive summary
- Performance metrics
- Response time distributions
- Error analysis
- System performance graphs
- Recommendations
JSON Reports
Machine-readable reports for:
- CI/CD integration
- Automated analysis
- Historical comparison
- Custom processing
Performance Charts
Visual representations of:
- Response time trends
- System resource usage
- Request rate variations
- Error rate patterns
Report Locations
load_tests/
├── reports/
│ ├── performance_report_20231207_143022.html
│ ├── performance_report_20231207_143022.json
│ └── system_metrics_20231207_143022.png
└── results/
├── report_Smoke Test_20231207_143022.html
├── stats_Smoke Test_20231207_143022_stats.csv
└── stats_Smoke Test_20231207_143022_failures.csv
🌐 Distributed Testing
Master-Worker Setup
For large-scale tests, use distributed testing:
Start Master Node
python load_tests/run_load_tests.py master moderate_load --workers 4
Start Worker Nodes
# On each worker machine
python load_tests/run_load_tests.py worker
Configuration
- Master: Coordinates test execution and aggregates results
- Workers: Execute user simulations and report to master
- Network: Ensure all nodes can communicate on port 5557
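The runner wraps Locust's native distributed flags; if you prefer to drive Locust directly, the equivalent raw invocations look roughly like this (assuming the locustfile lives at load_tests/locustfile.py):
# On the master machine
locust -f load_tests/locustfile.py --master --expect-workers 4 \
    --users 50 --spawn-rate 10 --run-time 10m --host http://localhost:8000
# On each worker machine (the master listens on port 5557 by default)
locust -f load_tests/locustfile.py --worker --master-host <master-ip>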
Best Practices
- Network: Use low-latency network between nodes
- Resources: Ensure each worker has sufficient CPU/memory
- Synchronization: Start the master first; it waits for the expected number of workers before the test begins
- Monitoring: Monitor each node individually
🎯 Best Practices
Test Planning
- Start Small: Begin with smoke tests
- Gradual Increase: Progressively increase load
- Realistic Scenarios: Simulate actual user behavior
- Baseline Testing: Establish performance baselines
- Regular Testing: Schedule periodic load tests
Test Execution
- Warm-up: Allow system to stabilize
- Duration: Run tests long enough for steady state
- Monitoring: Watch system resources during tests
- Documentation: Record test conditions and results
- Validation: Verify application functionality post-test
Performance Optimization
- Bottlenecks: Identify and address performance bottlenecks
- Caching: Implement effective caching strategies
- Database: Optimize queries and indexing
- CDN: Use content delivery networks for static assets
- Load Balancing: Distribute traffic effectively
CI/CD Integration
# Example GitHub Actions workflow steps
- name: Run Load Tests
  run: |
    python load_tests/run_load_tests.py headless smoke_test
- name: Upload Results
  uses: actions/upload-artifact@v4
  with:
    name: load-test-results
    path: load_tests/results/
🔧 Troubleshooting
Common Issues
1. Connection Refused
Error: Connection refused
Solution: Ensure the ATS application is running and accessible
# Check if application is running
curl http://localhost:8000/
# Start the application
python manage.py runserver
2. Import Errors
ModuleNotFoundError: No module named 'locust'
Solution: Install missing dependencies
pip install locust faker psutil matplotlib pandas
3. High Memory Usage
Symptoms: System becomes slow during tests
Solutions:
- Reduce number of concurrent users
- Increase system RAM
- Optimize test data generation
- Use distributed testing
4. Database Connection Issues
OperationalError: too many connections
Solutions:
- Increase database connection limit
- Use connection pooling
- Reduce concurrent database users
- Implement database read replicas
Debug Mode
Enable debug logging:
export LOCUST_DEBUG=1
python load_tests/run_load_tests.py run smoke_test
Performance Issues
Slow Response Times
- Check System Resources: Monitor CPU, memory, disk I/O
- Database Performance: Analyze slow queries
- Network Latency: Check network connectivity
- Application Code: Profile application performance
High Error Rates
- Application Logs: Check for errors in application logs
- Database Constraints: Verify database integrity
- Resource Limits: Check system resource limits
- Load Balancer: Verify load balancer configuration
Getting Help
- Check Logs: Review Locust and application logs
- Reduce Load: Start with smaller user counts
- Isolate Issues: Test individual components
- Monitor System: Use system monitoring tools
📚 Additional Resources
- Locust Documentation
- Performance Testing Best Practices
- Django Performance Tips
- PostgreSQL Performance
🤝 Contributing
To contribute to the load testing framework:
- Add Scenarios: Create new test scenarios in config.py
- Enhance Users: Improve user behavior in locustfile.py
- Better Monitoring: Add new metrics to monitoring.py
- Improve Reports: Enhance report generation
- Documentation: Update this README
📄 License
This load testing framework is part of the ATS project and follows the same license terms.
Happy Testing! 🚀
For questions or issues, please contact the development team or create an issue in the project repository.