Infrastructure and test environment #11

Open
Sebitosh opened this issue Dec 27, 2024 · 0 comments

Objective

From a single utility, the whole test suite should be generated and run against a live infrastructure that is specially configured and launched for the run, then shut down afterwards. The utility should be simple to use with default parameters or configurations, but should also be able to take arguments.

Current features

For now, only the generate_rules.py script exists. The test environment has to be set up manually so that it includes the generated rules, and go-ftw has to be run manually with a new configuration file and with the generated tests as argument.
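
For illustration, the steps that currently have to be performed by hand (and that a wrapper would automate) look roughly like the following. This is a minimal sketch: the generate_rules.py invocation and the go-ftw flags are assumptions, not the exact current interfaces.

    # Sketch of the current manual workflow (commands and flags are assumptions)
    import subprocess

    # Generate rules and test cases from the MRTS YAML files
    subprocess.run(["python3", "generate_rules.py"], check=True)

    # The generated rules then have to be included in the ModSecurity
    # configuration by hand, and the web server (re)started.

    # Finally, go-ftw is run by hand against the live server
    subprocess.run(
        ["go-ftw", "run", "--config", ".ftw.conf.yaml", "-d", "tests/regression/test/"],
        check=True,
    )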

Proposal

We would create an mrts utility that performs the following steps (a rough sketch of the overall flow is given after the list):

  1. Generate test cases and rules from the MRTS YAML files using the existing generate_rules.py script.
  2. Generate an mrts.load file that includes the generated rules, to be used in the ModSecurity configuration.
  3. By default, launch a pre-configured web server with ModSecurity acting as a reverse proxy to a backend for testing purposes. For the V2 test environment we would use apache2. This requires a preset configuration in the repository that sets up apache2 as a reverse proxy to albedo, plus a modsecurity.conf file containing the line Include absolute/path/to/mrts.load. The directory containing the infrastructure configuration could be passed as an argument (to allow different setups, such as an environment for testing V3). The script should wait until the servers are fully running before proceeding. Alternatively, this step could be skipped if the user already has a running server configuration, or if an alternative launch script is passed as an argument.
  4. Launch the test suite with go-ftw against the given infrastructure, using a valid YAML configuration file and the directory where the generated test cases are stored.
  5. Check the output of go-ftw; the utility should print a report to stdout indicating whether the regression test suite succeeded or failed, and in the latter case which tests failed.
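
A minimal sketch of what this flow could look like, assuming the utility is written in Python like generate_rules.py. Function names, the apache2 start command, the port, and the go-ftw flags are illustrative assumptions, not a final design:

    # mrts orchestration sketch -- names, commands, flags and the readiness
    # check are illustrative assumptions, not a final design.
    import socket
    import subprocess
    import sys
    import time

    def generate(rules_dir):
        # Steps 1-2: generate rules/tests and write mrts.load including them
        subprocess.run(["python3", "generate_rules.py"], check=True)
        with open(f"{rules_dir}/mrts.load", "w") as f:
            f.write(f"Include {rules_dir}/*.conf\n")

    def launch_infra(infra_dir, host="localhost", port=8080):
        # Step 3: start the pre-configured apache2/ModSecurity reverse proxy.
        # How the server is started (and on which port) depends on infra_dir.
        proc = subprocess.Popen(["apache2", "-d", infra_dir, "-D", "FOREGROUND"])
        # Wait until the proxy accepts connections before running the tests
        for _ in range(30):
            try:
                with socket.create_connection((host, port), timeout=1):
                    return proc
            except OSError:
                time.sleep(1)
        proc.terminate()
        raise RuntimeError("infrastructure did not come up in time")

    def run_tests(ftw_config, tests_dir):
        # Steps 4-5: run go-ftw and report success or failure on stdout
        result = subprocess.run(["go-ftw", "run", "--config", ftw_config, "-d", tests_dir])
        print("regression test suite", "succeeded" if result.returncode == 0 else "FAILED")
        return result.returncode

    if __name__ == "__main__":
        generate("./rules")
        server = launch_infra("./config_infra/apache2/")
        try:
            sys.exit(run_tests("./.ftw-mrts-apache.conf.yaml", "./tests/regression/test/"))
        finally:
            server.terminate()

In this sketch the details of which tests failed come from go-ftw's own output; the wrapper only aggregates the exit status into a pass/fail report.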

This mrts utility would be used like this:

mrts [-C mrts_config_file] [-D mrts_yaml_files] [-R generated_rules_dir] [-T generated_tests_dir] [-I infrastructure_config_dir] [-F go-ftw_config_file] [-V verbosity_level]
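
For example (hypothetical invocations; the alternate infrastructure directory is made up for illustration):

    mrts                                # run everything with the V2 defaults
    mrts -I ./config_infra/v3/ -V 2     # alternate infrastructure, more verbose output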

Default parameter values (for use with V2) would be located in a new configuration file, ./mrts-V2.conf, and would be:

  • mrts_config_file: ./mrts.conf
  • mrts_yaml_files: ./config_mrts/
  • generated_rules_dir: ./rules/
  • generated_tests_dir: ./tests/regression/test/
  • infrastructure_config_dir: ./config_infra/apache2/
  • go-ftw_config_file: ./.ftw-mrts-apache.conf.yaml
  • verbosity_level: 1
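
A minimal option-parsing sketch for these parameters, again assuming Python. The option letters and defaults come from this proposal; the INI-style handling of the mrts configuration file is an assumption made only for illustration.

    # Option parsing sketch; option letters and defaults are from this proposal,
    # the configuration file handling is an illustrative assumption.
    import argparse
    import configparser

    def parse_args(argv=None):
        parser = argparse.ArgumentParser(prog="mrts")
        parser.add_argument("-C", dest="mrts_config_file", default="./mrts.conf")
        parser.add_argument("-D", dest="mrts_yaml_files", default="./config_mrts/")
        parser.add_argument("-R", dest="generated_rules_dir", default="./rules/")
        parser.add_argument("-T", dest="generated_tests_dir", default="./tests/regression/test/")
        parser.add_argument("-I", dest="infrastructure_config_dir", default="./config_infra/apache2/")
        parser.add_argument("-F", dest="go_ftw_config_file", default="./.ftw-mrts-apache.conf.yaml")
        parser.add_argument("-V", dest="verbosity_level", type=int, default=1)
        args = parser.parse_args(argv)

        # Values from the mrts configuration file could fill in anything the
        # user did not set explicitly (merging logic omitted in this sketch).
        config = configparser.ConfigParser()
        config.read(args.mrts_config_file)
        return args, config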

Once the regression test cases have been written, integration with GitHub Actions for the ModSecurity repository will require the utility to be able to launch the infrastructure with the code built from the current PR.
