This site provides benchmark datasets and metrics for comparing the performance of macromolecular modeling and design protocols. Our goal is a resource where different computational methods can be compared on the same sets of inputs, each run with its best parameters, to allow a fair comparison of the state of the art. This concept is based on long-established sites frequently used to drive the optimization of programming languages.
Each benchmark below has an associated downloadable benchmark capture. These captures contain the input data needed to run the benchmark, as well as analysis scripts that evaluate predicted results when they are provided in a specified format. Each capture also contains command lines and parameters for running the datasets with at least one computational method. Following the philosophy of the programming language benchmarks mentioned above, the command lines should reflect best practice for each method and should ideally be contributed by a developer or experienced user.
In the sections below, we describe each benchmark, including its purpose, application, and the currently considered datasets, along with results from previous benchmark runs so that users can quickly gauge the performance of different methods. Relevant command lines are also provided to promote best practice for each method.
Our intention is to provide a dynamic resource. As new data become available, it may be appropriate to refine or expand certain datasets. The captures are hosted in version-controlled repositories, allowing us to update their contents and allowing users to track changes to the datasets or analyses. Major changes to the repositories will be tagged and referred to in the text below.
The archives and benchmark results currently concern methods contained in the Rosetta macromolecular modeling software suite; however, we invite broader contributions to both. If you have a computational method that you would like to include here, please contact us for more information.