Computer Language Benchmarks Game

The ComputerLanguageBenchmarksGame compares measurements of programs written in different programming languages - the measurements are CPU time used, elapsed time used, memory used, and gzipped source-code size. There are 4 sets of up-to-date measurements, one for each of 4 different OS/machine combinations.
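
As a rough illustration only (not the benchmarks game's actual measurement harness), those four measurements can be collected for a single program run with a few standard-library calls. The sketch below is in Go, assumes a Unix-like system, and uses a hypothetical ./fasta binary and fasta.go source file - substitute your own:

    package main

    import (
        "bytes"
        "compress/gzip"
        "fmt"
        "os"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        // Hypothetical benchmark binary and argument - substitute your own.
        // Stdout is left nil, so the program's output goes to the null device.
        cmd := exec.Command("./fasta", "25000000")

        start := time.Now()
        if err := cmd.Run(); err != nil {
            fmt.Println("run failed:", err)
            return
        }
        elapsed := time.Since(start)

        // CPU time used = user time + system time of the finished process.
        state := cmd.ProcessState
        cpu := state.UserTime() + state.SystemTime()

        // Peak memory used; on Linux, Maxrss is reported in kilobytes.
        rusage := state.SysUsage().(*syscall.Rusage)

        // Gzipped size of the source file.
        src, err := os.ReadFile("fasta.go")
        if err != nil {
            fmt.Println("read source failed:", err)
            return
        }
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf)
        zw.Write(src)
        zw.Close()

        fmt.Printf("elapsed %v, cpu %v, max rss %d kB, gzipped source %d bytes\n",
            elapsed, cpu, rusage.Maxrss, buf.Len())
    }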

Anyone can contribute new programs that implement some of the dozen tiny tasks shown on the benchmarks game website. About 2500 programs have been contributed. Programs are re-measured as and when new versions of the language implementations become available, and between new programs and new language versions there have been new measurements almost every week.

The Help page provides basic information and answers to frequently asked questions -

http://benchmarksgame.alioth.debian.org/help.php

Although the website is mostly bare facts about particular programs, categorised by programming language, there is one page of editorial comment -

http://benchmarksgame.alioth.debian.org/dont-jump-to-conclusions.php


Quite unexpectedly, some of the tiny tasks shown on the benchmarks game website have become a resource for programming language researchers.

Having a collection of programs that implement the same tasks in different programming languages is at least convenient. In Simon Peyton Jones's pithy phrase - "flawed, but available".


Just as unexpectedly, the Go programming language distribution includes Go programs for the tiny tasks shown on the benchmarks game website, and those programs are compiled during install -

    --- cd ../test/bench
    fasta
    reverse-complement
    nbody
    binary-tree
    binary-tree-freelist
    fannkuch
    fannkuch-parallel
    regex-dna
    regex-dna-parallel
    spectral-norm
    k-nucleotide
    k-nucleotide-parallel
    mandelbrot
    meteor-contest
    pidigits
    threadring
    chameneosredux
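
For readers unfamiliar with these tasks, here is a minimal, unofficial sketch of the binary-trees task in Go - build perfectly balanced binary trees and walk them to produce a checksum, which mostly exercises allocation and garbage collection. The programs listed above also keep a long-lived tree alive throughout and take the maximum depth from the command line; this sketch hard-codes it:

    package main

    import "fmt"

    // node is one vertex of a perfectly balanced binary tree.
    type node struct {
        left, right *node
    }

    // build allocates a complete binary tree of the given depth.
    func build(depth int) *node {
        if depth == 0 {
            return &node{}
        }
        return &node{build(depth - 1), build(depth - 1)}
    }

    // check walks the tree and returns the number of nodes visited,
    // forcing every allocation to be touched.
    func check(n *node) int {
        if n.left == nil {
            return 1
        }
        return 1 + check(n.left) + check(n.right)
    }

    func main() {
        const maxDepth = 12 // the real task takes this from the command line

        // Many short-lived trees of increasing depth, plus their checksums,
        // exercise the language implementation's allocator and GC.
        for depth := 4; depth <= maxDepth; depth += 2 {
            iterations := 1 << uint(maxDepth-depth+4)
            sum := 0
            for i := 0; i < iterations; i++ {
                sum += check(build(depth))
            }
            fmt.Printf("%d trees of depth %d, check: %d\n", iterations, depth, sum)
        }
    }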

Similarly, the Scala project automatically runs "a quick and dirty benchmarking suite over all our code revisions, in order to detect sudden changes in performance as a result of changes to the code" -

http://www.scala-lang.org/node/360
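
The Scala project's actual tooling is not shown here, but the underlying idea - time a fixed workload on each code revision and flag a sudden slowdown against a stored baseline - can be sketched in a few lines. The ./fannkuch binary, the baseline value, and the 20% threshold below are illustrative assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "sort"
        "time"
    )

    // timeOnce runs the given benchmark command once and returns its elapsed time.
    func timeOnce(name string, args ...string) (time.Duration, error) {
        start := time.Now()
        err := exec.Command(name, args...).Run()
        return time.Since(start), err
    }

    func main() {
        const (
            runs      = 5
            baseline  = 2 * time.Second // hypothetical timing recorded for the previous revision
            threshold = 1.20            // flag anything more than 20% slower than baseline
        )

        // Time several runs and take the median to damp measurement noise.
        times := make([]time.Duration, 0, runs)
        for i := 0; i < runs; i++ {
            d, err := timeOnce("./fannkuch", "11") // hypothetical benchmark binary
            if err != nil {
                fmt.Println("benchmark failed:", err)
                return
            }
            times = append(times, d)
        }
        sort.Slice(times, func(i, j int) bool { return times[i] < times[j] })
        median := times[runs/2]

        if float64(median) > float64(baseline)*threshold {
            fmt.Printf("possible regression: median %v vs baseline %v\n", median, baseline)
        } else {
            fmt.Printf("ok: median %v vs baseline %v\n", median, baseline)
        }
    }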


Several of the tiny tasks had already been used as benchmarks and were adopted almost unchanged for the benchmarks game, for example - fannkuch (now fannkuch-redux) and binary-trees.
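
As a concrete example of what such a task asks for, here is a rough, unofficial fannkuch sketch in Go: over every permutation of 1..n, repeatedly reverse the prefix whose length is given by the first element until that element is 1, and report the maximum number of reversals seen. It omits the checksum that the fannkuch-redux variant adds and hard-codes n rather than reading it from the command line:

    package main

    import "fmt"

    // flips counts how many prefix reversals it takes until the first
    // element of the permutation becomes 1 (the "pancake flipping" count).
    func flips(perm []int) int {
        p := append([]int(nil), perm...) // work on a copy
        count := 0
        for p[0] != 1 {
            k := p[0]
            // reverse the first k elements
            for i, j := 0, k-1; i < j; i, j = i+1, j-1 {
                p[i], p[j] = p[j], p[i]
            }
            count++
        }
        return count
    }

    // permute generates every permutation of p[i:] in place and feeds
    // each complete permutation to visit.
    func permute(p []int, i int, visit func([]int)) {
        if i == len(p) {
            visit(p)
            return
        }
        for j := i; j < len(p); j++ {
            p[i], p[j] = p[j], p[i]
            permute(p, i+1, visit)
            p[i], p[j] = p[j], p[i]
        }
    }

    func main() {
        const n = 7 // the real task takes n from the command line (typically 11 or 12)
        p := make([]int, n)
        for i := range p {
            p[i] = i + 1
        }
        max := 0
        permute(p, 0, func(perm []int) {
            if f := flips(perm); f > max {
                max = f
            }
        })
        fmt.Printf("Pfannkuchen(%d) = %d\n", n, max)
    }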

Similarly, many of the tiny tasks from the benchmarks game have been adopted by other projects - notably 10 of the 26 WebKit SunSpider JavaScript tests.


The benchmarks game sometimes seems a little like a Rorschach test for programmers - how they respond can be more interesting than the measurements. -- IsaacGouy

"It has been said that Ruby is slow. The benchmarks from the Computer Language Benchmarks Game shows the various ruby implementations as ranking close to last. Benchmarks only tell a partial truth. The reality of the situation is that for the vast majority of situations the ruby interpreter is fast enough. Slow code is most likely your fault. In this article, I’ll be showing you a process by which you can take your slow ruby code and tune it to become as fast as it can be." -- Carl Lerche, Write Fast Ruby: It’s All About the Science, p. 32

http://2009.gogaruco.com/downloads/Wrap2009.pdf


CategoryProgrammingLanguage, CategoryMetrics

