
Contact: sandbox-developers at movial.com

Source code

Source browser is available at http://sandbox.movial.com/gitweb?p=mx11mark.git;a=summary

Introduction

Movial X11 Mark is a customizable tool for benchmarking X11 operations. It was created for the purpose of executing benchmarks that approximate the profile of real-life applications, instead of a synthetic set of tests that existing tools use. The sequence of benchmarks to run, as well as some overall parameters, is defined by a simple script file.

Details of operation

The following benchmarks are supported:

  • pixmap: Creates a pixmap and then frees it. Size is controllable.
  • picture: Creates a render picture and then frees it. The same pixmap is used as backing store in all iterations. Size is controllable.
  • rectangle: Fills a rectangle with a random color. Size and operator are controllable.
  • trapezoid: Composites a trapezoid with a fixed color. Size and operator are controllable.
  • text: Renders text, either a user-defined string or a random string of specified length.

The tool needs to occasionally do an XSync and check how much time has elapsed in order to run benchmarks for the specified time. Since XSync causes extra overhead that adversely affects the result, the measurement can be taken in two steps: first, a short (100 ms) run with frequent XSync calls determines the approximate speed; that estimate is then used to do a longer run in a single batch.
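The two-step approach can be sketched as follows. This is an illustrative sketch, not the tool's actual source code: `op` stands for any callable representing one X11 operation, and the comments mark where the real tool would call XSync.

```python
import time

def calibrate_then_run(op, target_ms=1000, probe_ms=100):
    """Two-step measurement: a short probe run estimates the speed,
    then the full run is issued as a single batch (sketch only)."""
    # Step 1: short probe run to estimate the approximate speed.
    count = 0
    start = time.monotonic()
    while (time.monotonic() - start) * 1000 < probe_ms:
        op()          # one benchmark operation
        count += 1    # the real tool does a frequent XSync here
    elapsed_ms = (time.monotonic() - start) * 1000
    ops_per_ms = count / elapsed_ms

    # Step 2: use the estimate to run the full duration in one batch,
    # paying the synchronization overhead only once at the end.
    batch = max(1, int(ops_per_ms * target_ms))
    start = time.monotonic()
    for _ in range(batch):
        op()
    elapsed = time.monotonic() - start  # one final XSync would go here
    return batch / elapsed  # operations per second
```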

A statistical analysis mode is also supported. This works similarly to the single-batch mode, but the measurement duration is divided into a fixed number of batches instead. An average is computed, and if at most 10% of the samples fall more than 5% away from the average, those are considered measurement errors and discarded. The average of the good samples is used as the score, and an error margin with 95% confidence (twice the standard deviation) is also computed.
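The outlier filtering described above can be sketched like this. The function names and defaults are assumptions for illustration; only the 5% tolerance, the 10% outlier budget, and the two-standard-deviation margin come from the description.

```python
import statistics

def analyze_batches(samples, tolerance=0.05, max_outlier_frac=0.10):
    """Statistical-mode analysis sketch: discard samples more than 5%
    from the average as measurement errors, provided they make up at
    most 10% of all samples, then report the average of the good
    samples and a ~95% confidence margin (twice the standard deviation).
    """
    mean = statistics.mean(samples)
    outliers = [s for s in samples if abs(s - mean) > tolerance * mean]
    if len(outliers) <= max_outlier_frac * len(samples):
        good = [s for s in samples if s not in outliers]
    else:
        good = samples  # too many outliers: discard nothing
    score = statistics.mean(good)
    margin = 2 * statistics.stdev(good) if len(good) > 1 else 0.0
    return score, margin
```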

At the end of a benchmark run, an output file is produced in CSV format. Each line contains the results of a single test, in the following order:

  • Raw score (operations per second)
  • 95% confidence margin (0 if not run in statistical mode)
  • Weighted score
  • Performance in pixels per second (where applicable)
  • Test label (as specified in the script)

The last line is a combined, weighted score. It's computed as S = 1/(W1/T1 + W2/T2 + ... + Wn/Tn), where Tn is the lower boundary of the 95% confidence range for test n and Wn is its weight.
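Written as code, the combined score is a weight-scaled harmonic combination of the per-test lower bounds. A minimal sketch (the tuple layout is an assumption for illustration):

```python
def combined_score(results):
    """Combined score S = 1 / (W1/T1 + ... + Wn/Tn), where
    T_i = score_i - margin_i is the lower boundary of the 95%
    confidence range for test i and W_i is its weight."""
    return 1.0 / sum(w / (score - margin) for score, margin, w in results)
```

For example, two equally weighted tests scoring 100 and 200 ops/s (with zero margins) combine to 1 / (1/100 + 1/200) ≈ 66.7.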

Benchmark script format

The benchmark scripts consist of three types of statements: assignments, labels and commands. Assignments control the parameters of the benchmarks. Labels set the benchmark title to be displayed in output. Commands trigger running the actual benchmarks.

The format is line based, and one line contains at most one statement. Whitespace is ignored at the beginning of the line, but not anywhere else. Lines with unknown content are ignored. Everything is case sensitive.

Assignments are of the form "key=value". The values stay in effect until assigned again. The following keys are defined:

  • time=msec: Time to run an individual benchmark for.
  • type={normal|single-batch|statistical}: How to measure the benchmark.
  • batch=count: Number of operations in one batch. For statistical mode, number of batches to run.
  • weight=num: Weight of the benchmark in the final score.
  • size=wxh: Size of the operation (for pixmap, picture, rectangle and trapezoid).
  • op={src|over|add}: Operator to use (for rectangle and trapezoid).
  • font=name: Font to be used in the text benchmark, as a FontConfig font description string.
  • text=string: An exact string to use in the text benchmark.
  • randomtext=length: Generate random strings of the given length for the text benchmark.

Labels are of the form "name:". The name will be shown for all subsequent benchmarks. It is suggested to use exactly one label per command. The order of assignments and labels is not significant.

Commands are of the form "run benchmark", where benchmark is one of the types listed above. The command runs the requested benchmark with the parameter values currently in effect.
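Putting the three statement types together, a script might look like the following. The labels, sizes, and font name are made-up example values; the leading comment line works only because lines with unknown content are ignored.

```
time=2000
type=statistical
batch=20
weight=1.0

size=64x64
op=over
small rectangles:
run rectangle

size=256x256
weight=2.0
large rectangles:
run rectangle

font=Sans-12
randomtext=32
random text:
run text
```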