DevSecOps : Writing Load Tests #17006

Open
emvaldes opened this issue Jan 7, 2025 · 8 comments

emvaldes commented Jan 7, 2025

Load testing involves simulating user behavior or workload patterns to evaluate the performance and scalability of a system. Below is a detailed framework on how to design, write, and execute effective load tests.


emvaldes commented Jan 7, 2025

Understanding the Basics of Load Testing

Goals of Load Testing

  1. Measure system performance under varying workloads.
  2. Identify bottlenecks, resource limits, and performance degradation.
  3. Validate system scalability to handle increased traffic or batch processing loads.

Key Metrics to Capture

  • Latency:
    • Average response time.
    • Tail-end latency (e.g., 95th and 99th percentile).
  • Throughput:
    • Number of requests or transactions processed per second.
  • Error Rate:
    • Percentage of failed requests.
  • Resource Utilization:
    • CPU, memory, disk IOPS, and network bandwidth usage.

Test Scenarios to Simulate

  1. Normal Load: Typical user traffic or batch workload.
  2. Peak Load: High traffic during peak usage hours or special events.
  3. Stress Test: Traffic beyond expected limits to find system breaking points.
  4. Soak Test: Sustained traffic over an extended period to detect resource leaks or stability issues (a minimal k6 configuration sketch follows this list).
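
These profiles map naturally onto k6 stage configurations. A minimal soak-test sketch is shown below; the endpoint, durations, and thresholds are illustrative placeholders, and a peak or stress profile would use the same structure with higher targets and shorter hold times.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '5m', target: 50 },  // Ramp up to a typical load
    { duration: '4h', target: 50 },  // Hold the load for an extended period (soak)
    { duration: '5m', target: 0 },   // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'], // Tail-end latency
    http_req_failed: ['rate<0.01'],                 // Error rate
  },
};

export default function () {
  http.get('https://your-api-endpoint.com/health'); // Placeholder endpoint
  sleep(1);
}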


emvaldes commented Jan 7, 2025

Designing Load Tests

Step 1: Define Test Objectives

  • Identify the system components to be tested (e.g., APIs, batch processing workflows).
  • Define key performance indicators (KPIs):
    • Maximum acceptable latency: <500ms.
    • Throughput: 10,000 requests/sec.
    • Error rate: <1% failures.

Step 2: Define Workload Patterns

  • User Behavior:
    • Simulate concurrent users sending requests.
    • Randomize user actions (e.g., GET, POST requests).
  • Batch Workloads:
    • Simulate varying batch sizes and submission rates.

Step 3: Define Success Criteria

  • Set thresholds for metrics like latency, throughput, and error rate.
  • Define pass/fail conditions for automated test pipelines (see the threshold sketch below).
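
The KPIs from Step 1 and the pass/fail conditions from Step 3 can be encoded directly as k6 thresholds. A minimal sketch follows; the endpoint and load shape are placeholders, and the throughput threshold only passes if the generated load actually reaches that rate.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 50,        // Placeholder load shape
  duration: '5m',
  thresholds: {
    http_req_duration: [
      'p(95)<500',                                    // Maximum acceptable latency
      { threshold: 'p(99)<1000', abortOnFail: true }, // Stop the run early if latency degrades badly
    ],
    http_req_failed: ['rate<0.01'], // Error rate <1%
    http_reqs: ['rate>10000'],      // Throughput target in requests per second
  },
};

export default function () {
  http.get('https://your-api-endpoint.com/data'); // Placeholder endpoint
  sleep(1);
}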


emvaldes commented Jan 7, 2025

Writing Load Testing Scripts

A. Writing API Load Tests

API load tests simulate traffic to one or more endpoints.

Example: K6 Script for API Load Testing
import http from 'k6/http';
import { sleep, check } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 10 },  // Ramp up to 10 users
    { duration: '5m', target: 50 },  // Ramp up to 50 users over 5 minutes
    { duration: '1m', target: 0 },   // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must complete under 500ms
    http_req_failed: ['rate<0.01'],   // Error rate <1%
  },
};

export default function () {
  const res = http.get('https://your-api-endpoint.com/data');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
  sleep(1);
}
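
Assuming the script above is saved as tests/api_load_test.js (the path is illustrative), it can be run locally with:

k6 run tests/api_load_test.js

k6 prints an end-of-run summary showing whether each threshold passed.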


emvaldes commented Jan 7, 2025

B. Writing Batch Processing Load Tests

Batch processing tests simulate the submission of varying batch sizes and track performance.

Example: K6 Script for Batch Processing
import http from 'k6/http';
import { sleep, check } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 20 },  // Ramp up to 20 users
    { duration: '5m', target: 100 }, // Ramp up to 100 users over 5 minutes
    { duration: '1m', target: 0 },   // Ramp down
  ],
};

export default function () {
  const payload = JSON.stringify({
    batchId: `batch-${__VU}-${__ITER}`, // Unique batch ID
    data: Array.from({ length: 2500 }, (_, i) => i + 1), // Batch of 2.5k items
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  const res = http.post('https://your-api-endpoint.com/process-batch', payload, params);
  check(res, {
    'batch accepted (status 200)': (r) => r.status === 200, // Verify the batch was accepted
  });
  sleep(1);
}
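
Step 2 calls for varying batch sizes; one way to do that is to randomize the array length each iteration, replacing the payload construction above with something like the following (the size range is illustrative):

  // Randomize the batch size per iteration, between 500 and ~5,000 items
  const batchSize = Math.floor(Math.random() * 4500) + 500;
  const payload = JSON.stringify({
    batchId: `batch-${__VU}-${__ITER}`,
    data: Array.from({ length: batchSize }, (_, i) => i + 1),
  });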


emvaldes commented Jan 7, 2025

C. Writing Concurrency Tests

Concurrency tests simulate multiple users or processes interacting with the system simultaneously.

Example: K6 Script for Concurrency Testing
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 50,  // Simulate 50 concurrent users
  duration: '10m', // Run the test for 10 minutes
};

export default function () {
  const payload = JSON.stringify({
    batchId: `batch-${__VU}-${__ITER}`,
    data: Array.from({ length: 1000 }, (_, i) => i + 1), // Batch of 1k items
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  http.post('https://your-api-endpoint.com/process-batch', payload, params);
  sleep(Math.random() * 2); // Random interval between requests
}
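
The concurrency level can also be set from the command line, which overrides the values in options (the filename is illustrative):

k6 run --vus 100 --duration 5m concurrency_test.js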


emvaldes commented Jan 7, 2025

D. Writing Complex Traffic Simulations

Simulate a combination of GET and POST requests with randomized payloads and intervals.

Example: Mixed Traffic Simulation
import http from 'k6/http';
import { sleep, check } from 'k6';

export let options = {
  scenarios: {
    steady: {
      executor: 'constant-vus',
      vus: 50,
      duration: '10m',
    },
    spike: {
      executor: 'ramping-vus',
      startVUs: 10,
      stages: [
        { duration: '2m', target: 100 }, // Ramp up to 100 users
        { duration: '5m', target: 100 }, // Hold at 100 users
        { duration: '1m', target: 0 },   // Ramp down
      ],
    },
  },
};

export default function () {
  const urls = [
    'https://your-api-endpoint.com/get-data',
    'https://your-api-endpoint.com/process-data',
  ];
  const payload = JSON.stringify({ key: 'value' });

  const res = Math.random() > 0.5
    ? http.get(urls[0])
    : http.post(urls[1], payload, { headers: { 'Content-Type': 'application/json' } });

  check(res, { 'status was 200': (r) => r.status === 200 });
  sleep(Math.random() * 2);
}
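
Note that the steady and spike scenarios run concurrently by default; if a staggered profile is preferred, the spike can be delayed relative to the steady baseline with the scenario-level startTime option (e.g., startTime: '3m' on the spike scenario).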


emvaldes commented Jan 7, 2025

Automating Load Testing

GitHub Actions Workflow

Automate the execution of load tests after deployments.

name: Load Testing Workflow

on:
  push:
    branches:
      - main

jobs:
  load-test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install K6
        run: |
          # k6 is distributed via Grafana's apt repository, not the default Ubuntu repos
          sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
            --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
          echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
          sudo apt-get update && sudo apt-get install -y k6

      - name: Run Load Test
        run: k6 run --vus 100 --duration 5m --out json=k6-results.json tests/load_test.js

      - name: Upload Results
        uses: actions/upload-artifact@v3
        with:
          name: load-test-results
          path: ./k6-results.json
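
Because k6 exits with a non-zero status when any configured threshold fails, the load-test job above fails the workflow automatically whenever the pass/fail criteria are not met; the JSON file written by --out json=k6-results.json is what the final step uploads as an artifact.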


emvaldes commented Jan 7, 2025

Reporting Load Test Results

Azure Monitor Integration

Log custom metrics (e.g., batch latency, error rate) to Azure Monitor for centralized analysis.

import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  const payload = JSON.stringify({ batchId: `batch-${__VU}-${__ITER}`, data: [] }); // Illustrative payload
  const res = http.post('https://your-api-endpoint.com/process-batch', payload, {
    headers: { 'Content-Type': 'application/json' },
  });

  // Capture custom metrics from the response
  const latency = res.timings.duration;
  const success = res.status === 200;

  // Log data to Azure Monitor (via SDK or custom logging integration);
  // logToAzureMonitor is a placeholder, not a k6 built-in.
  logToAzureMonitor({
    metricName: 'batch_processing_latency',
    value: latency,
    dimensions: { success },
  });

  sleep(1);
}
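
Since logToAzureMonitor above is only a placeholder for a custom integration, a k6-native way to capture the same measurements is to record them as custom metrics, which appear in the end-of-test summary and in any configured results output and can then be forwarded to Azure Monitor. A minimal sketch (endpoint and payload are illustrative):

import http from 'k6/http';
import { sleep } from 'k6';
import { Trend, Rate } from 'k6/metrics';

// Custom metrics aggregated by k6 alongside the built-in ones
const batchLatency = new Trend('batch_processing_latency', true); // true = values are durations
const batchSuccess = new Rate('batch_processing_success');

export default function () {
  const payload = JSON.stringify({ batchId: `batch-${__VU}-${__ITER}`, data: [] });
  const res = http.post('https://your-api-endpoint.com/process-batch', payload, {
    headers: { 'Content-Type': 'application/json' },
  });

  batchLatency.add(res.timings.duration); // Record per-batch request latency
  batchSuccess.add(res.status === 200);   // Record success/failure rate

  sleep(1);
}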

Power BI Reporting

  • Export metrics from Azure Monitor to Power BI.
  • Visualize metrics like latency, throughput, and resource usage.
