The utility of this website depends both on the amount of available data and on the benchmarking
resources. We welcome users to submit the results of benchmark runs using their preferred methods so that
these may be presented for public view.
Note: So that benchmark results can be independently verified, we will only publish results
from methods that are freely available, at least for academic use.
Submitting new benchmark run data
When submitting new data for display on the website, please include the following information:
- any attributions, e.g. who was involved in generating the data;
- the name of the software suite, if applicable;
- the name of the modeling method (and a URL, if applicable);
- a revision number/hash for the modeling method and a link to the repository, if publicly accessible;
- links to any relevant publications;
- notes on any modifications to the supplied datasets, e.g. omitted benchmark cases;
- any command lines or flags used in the run;
- any special parameter/input files used in the run. These can be supplied using the file attachment field below;
- the values of the benchmark metrics, computed using the supplied analysis scripts. Please include error ranges if possible.
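To make the checklist above concrete, here is a minimal sketch of how a submission could be collected into a single machine-readable record. All field names, values, and the JSON layout here are purely illustrative assumptions, not a required schema; supply the information in whatever form is convenient.

```python
import json

# Hypothetical submission record covering the fields listed above.
# Every name and value below is an illustrative placeholder.
submission = {
    "attributions": ["Example Contributor"],       # who generated the data
    "software_suite": "ExampleSuite",              # if applicable
    "method": {"name": "example-method", "url": None},
    "revision": {"hash": "abc1234", "repository": None},
    "publications": [],                            # links to relevant papers
    "dataset_modifications": "none",               # e.g. omitted benchmark cases
    "command_line": "example-method --flag value",
    "attachments": [],                             # special parameter/input files
    "metrics": {
        "example_metric": {"value": 0.0, "error": None},
    },
}

print(json.dumps(submission, indent=2))
```

A plain-text description of the same information is equally acceptable; the point is only that each item on the checklist is answered somewhere in the submission.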
We wish to publish the best-performing settings for each modeling method in order to provide fair comparisons.
While we will also publish some worse-performing runs (e.g. to compare variations of a method), please include
at least one set of results that represents the method at its best.
After submission, the webserver team will contact you if more information is needed, or to let you know when your data
has been published.