This site is a community resource providing benchmark datasets and metrics to compare the performance of macromolecular modeling and design protocols. There are three main components to the site, each described below.
The website is best viewed on a wide display (more than 1200 pixels wide) but has also been formatted for mobile devices.
Benchmark results can be viewed on the respective page. The benchmarks are grouped according to their general type, e.g. design or structure prediction.
Each benchmark section describes the target problem, the associated datasets, and the currently included computational methods.
Results from benchmark runs are shown to indicate which method performs best for each metric; the best metric values are highlighted in bold text. If benchmark results have an associated publication, this is indicated with a publication icon. Clicking on this icon will open the publication in your browser. The researchers who contributed the results for a benchmark run can be viewed by hovering your mouse over the icon.
Finally, we display the important flags/parameters used for each method so that users familiar with the methods can identify why results vary between runs. The full command lines are available in the associated benchmark capture.
A benchmark capture is a downloadable archive containing the data and tools that allow users to run a benchmark on their own computational resources. Where possible, we have given the benchmark captures a common directory structure. The captures contain detailed documentation on their use.
Each capture contains scripts to run the benchmark with at least one computational method on at least one type of cluster system. If you wish to contribute code to run the method on a different cluster system, please feel free to contact us.
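To make the common layout concrete, a capture might be organized along the following lines. This is purely an illustrative sketch; the directory and file names here are assumptions, not the actual contents of any specific capture (consult each capture's own documentation for its real layout).

```
capture_name/
├── README        # detailed documentation on running the benchmark
├── input/        # the benchmark dataset (e.g. structures, sequences)
├── scripts/      # submission scripts for supported cluster systems
├── analysis/     # scripts to compute the benchmark metrics
└── output/       # sample output for comparison
```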
The captures are version-controlled and freely available. The version control allows us to transparently update the datasets as more information (e.g. structural data) becomes available. Major revisions of the captures are tagged to indicate significant changes.
You can download the capture for a benchmark either by clicking on the button at the end of its section on the main page or else by using the "Benchmark captures" menu above.
We rely on the community to make this website as informative as possible. Please contact us if you wish to add any new data or have some feedback on how we can improve the site. If you have a method which performs well for a given benchmark or metric, please consider working with us so that we can help publicize those results.