This is a mostly platform-independent template; most scripts are written in CMake, which is required as the build system.
1. Change the project name in the root `CMakeLists.txt`
2. Run CMake, e.g. via `cmake -B cmake-build-debug .` (automatically done by CLion)
3. Run `contests/add_contest NAME`
4. Then, either
   - download `samples-TASK.zip` files to `contests/NAME` and run `contests/NAME/load_tasks`, or
   - run `contests/NAME/add_task TASK`
5. Write code (this is the important part)
6. Run `ctest` in the task's cmake binary directory (`cmake-build-debug/contests/NAME/TASK` if configured according to step 2) to test it
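For example, a full session for a hypothetical contest `ccc` with a single task `A` (both names are placeholders) might look like this; depending on your setup, CMake may need to be re-run so new tasks are picked up:

```sh
# configure once (CLion does this automatically)
cmake -B cmake-build-debug .

# create a contest and a task
contests/add_contest ccc
contests/ccc/add_task A

# ... write the solution ...

# re-run cmake so the new task is picked up, then test
cmake -B cmake-build-debug .
cd cmake-build-debug/contests/ccc/A
ctest --output-on-failure
```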
- `contests/add_contest NAME` creates a new contest `NAME`. This invokes `scripts/add_contest.cmake` to create a contest folder, using `templates/contest.cmake` and `templates/contest/*`.
- `contests/NAME/add_task TASK` creates a new task `TASK` in contest `NAME`. This invokes `scripts/add_task.cmake` to create a task folder, using `templates/task.cmake`, `templates/template.cpp` and `templates/task/*`.
- `contests/NAME/load_tasks` creates a task for each `samples-TASK.zip` in contest `NAME` and adds the samples contained in the zip file. This invokes `scripts/load_tasks.cmake`, which uses `scripts/add_task.cmake` to create the task folders.
- `contests/NAME/TASK/add_sample NAME` creates a sample for the given task (both `NAME.in` and `NAME.out`). This is just a bash script, but a rather simple one, so it should be easily portable.
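For instance, adding a hand-written sample `mine` to a hypothetical task `A` in contest `ccc` (all names are placeholders; the sample files are assumed to land in the task folder):

```sh
contests/ccc/A/add_sample mine
# fill in the input and the expected output
echo "1 2" > contests/ccc/A/mine.in
echo "3"   > contests/ccc/A/mine.out
```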
Run `ctest` in the cmake build directory corresponding to a task (in CLion: `cmake-build-TYPE/contests/NAME/TASK`) to run all samples. Add `--output-on-failure` for more detail (e.g. the solution diff). Add `-j 8` and/or `--progress` if you feel like it.
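For example:

```sh
cd cmake-build-debug/contests/NAME/TASK
ctest --output-on-failure -j 8 --progress
```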
Each time `cmake` is run (the project is reloaded), the `ctest` tests are generated.
Each task receives a build test (as testing is performed via a script, the test runner itself does not have to be built).
For each `SAMPLE` of the task, a test is created which runs the task executable with `SAMPLE.in` as input and compares the output with `SAMPLE.out`.
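Conceptually, each sample test boils down to something like the following pipeline (a simplification; the real runner scripts also enforce the time limit and write the result files described below):

```sh
# run the solution on the sample input, capture the output, compare to expected
./TASK < SAMPLE.in > SAMPLE.result
diff SAMPLE.result SAMPLE.out
```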
The test fails if:
- the execution does not finish within 5 seconds (configurable in `config.cmake`),
- the executable exits with a non-zero exit code (usually a run error; error output is printed to the console if using `--output-on-failure`), or
- the output does not match the desired output (wrong answer; diff output is printed to the console if using `--output-on-failure`).
Program output is saved to `SAMPLE.result`, diff output (if any) is saved to `SAMPLE.result.diff`, and error output (if any) is saved to `SAMPLE.result.err`.
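After a failed test you can inspect these files directly, e.g. for a hypothetical sample named `1` (assuming the files end up in the task's build directory):

```sh
cat 1.result       # what the program printed
cat 1.result.diff  # difference to 1.out (wrong answer)
cat 1.result.err   # error output (run error)
```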
The sample tests are skipped if the build fails.
There are two test runner scripts: `perform_test.sh` for UNIX and `perform_test.cmake` for other platforms. `perform_test.sh` terminates itself with `SIGSEGV` to make `ctest` output `Exception` instead of `Failed`, allowing a quick distinction between run errors and wrong answers. `perform_test.cmake` does not have this capability, so both run errors and wrong answers are reported as `Failed`.
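The `SIGSEGV` trick is just the script sending the signal to itself; a minimal sketch of the idea (not the actual script; `$executable`, `$input` and `$expected` are hypothetical variables):

```sh
#!/bin/bash
"$executable" < "$input" > output 2> errors
if [ $? -ne 0 ]; then
    kill -SEGV $$        # run error: ctest reports "Exception"
fi
diff output "$expected"  # wrong answer: non-zero exit, ctest reports "Failed"
```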
`perform_test.cmake` uses `diff` to compare outputs; this might need to be changed depending on the setup. `perform_test.sh` uses `diff`, `head` and `wc` (although the latter two are not strictly required).
The simplest way to install this template is to clone this repository.
Then you can add this repository as the `upstream` remote (`git remote add upstream REPO_URL`) and change `origin` to your own repository.
Once `upstream` is set up, you can `git pull upstream master` to update to the latest version.
The template is structured such that, if at all possible, new features also apply to existing tasks.
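A possible setup (with `REPO_URL` and `YOUR_REPO_URL` as placeholders):

```sh
git clone REPO_URL my-contests
cd my-contests
git remote rename origin upstream    # keep the template as upstream
git remote add origin YOUR_REPO_URL  # point origin at your own repository
git push -u origin master

# later: update to the latest template version
git pull upstream master
```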
If you already have an existing repository, you can add this repository to its history:

1. (optional) Rename/move files you know will conflict.
2. `git remote add upstream REPO_URL`.
3. `git pull --allow-unrelated-histories upstream master`. This is almost certain to result in conflicts, especially in `CMakeLists.txt`. In most cases you can simply pick the remote files when resolving conflicts, unless you know that you should not. If you feel daring, you can specify `-s recursive -X theirs` to automatically pick the remote files during the merge.
4. Incorporate your existing files by creating the appropriate contests and tasks and copying the respective source files.
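The whole sequence might look like this (`REPO_URL` is a placeholder):

```sh
cd existing-repo
git remote add upstream REPO_URL
git pull --allow-unrelated-histories upstream master
# or, to automatically prefer the template's files on conflicts:
git pull --allow-unrelated-histories -s recursive -X theirs upstream master
```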