This is the beginning of a module to import and process benchmark
outputs. The module currently supports importing a bench.out file and
validating it against a schema file. In the future this could grow a
set of routines that benchmark consumers may find useful to build
their own analysis tools. validate_benchout.py has been altered to
use this module too.
* benchtests/scripts/import_bench.py: New file.
* benchtests/scripts/validate_benchout.py: Import import_bench
instead of jsonschema.
(validate_bench): Remove function.
(main): Use import_bench.
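As a rough sketch of the kind of routine such a module could provide,
the following loads a bench.out file and validates it against a JSON
schema using the jsonschema package. The function name parse_bench
and the exact error handling are illustrative assumptions, not the
actual interface of import_bench.py.

    import json
    import jsonschema


    def parse_bench(filename, schema_filename):
        """Load a bench.out file and validate it against a schema.

        Hypothetical helper for illustration; raises
        jsonschema.exceptions.ValidationError if the file does not
        conform to the schema.
        """
        with open(schema_filename, 'r') as schemafile:
            schema = json.load(schemafile)
        with open(filename, 'r') as benchfile:
            bench = json.load(benchfile)
        jsonschema.validate(bench, schema)
        return bench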
Add a new 'init' directive that specifies the name of a function to
call to do function-specific initialization. This is useful for
benchmarks that need to do one-time initialization before the
benchmarked functions are executed.
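A hypothetical benchmark input file using the directive might look
like the snippet below; the '##' directive syntax shown here and the
names bench_foo_init and foo (with a single int argument) are
placeholders for illustration:

    ## includes: stdlib.h
    ## init: bench_foo_init
    ## args: int
    ## ret: int
    1
    16
    1024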
This patch adds an option to get detailed benchmark output for
functions. Invoking the benchmark with 'make DETAILED=1 bench' causes
each benchmark program to store a mean execution time for each input
it works on. This gives a more comprehensive picture of a function's
performance than a single overall mean figure.
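A small consumer script along the following lines could then
summarize the per-input means; the key names 'functions' and
'timings' are assumptions about the detailed output layout, used only
for illustration:

    import json
    import sys


    def print_detailed_means(filename):
        """Print per-input mean timings from a detailed bench.out.

        Assumes a layout of the form
        {"functions": {func: {variant: {"timings": [...]}}}}.
        """
        with open(filename, 'r') as benchfile:
            bench = json.load(benchfile)
        for func, variants in bench['functions'].items():
            for variant, attrs in variants.items():
                for i, mean in enumerate(attrs.get('timings', [])):
                    print('%s(%s) input %d: mean %g'
                          % (func, variant, i, mean))


    if __name__ == '__main__':
        print_detailed_means(sys.argv[1])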
This patch changes the output format of the main benchmark output
file (bench.out) to an extensible format. I chose JSON over XML
because, in addition to being extensible, it is not too verbose and
has good support in Python.
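To give a feel for the format, a bench.out in this style might look
roughly like the snippet below; the attribute names shown
(timing_type, functions, duration, iterations, mean) are illustrative
assumptions rather than the exact schema:

    {
      "timing_type": "hp_timing",
      "functions": {
        "foo": {
          "": {
            "duration": 1.25e9,
            "iterations": 1000000,
            "mean": 1250.0
          }
        }
      }
    }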
The significant change I have made in terms of functionality is to
store timing information as a JSON attribute instead of a string. To
do that, a separate program prints out a JSON snippet recording the
type of timing used (hp_timing or clock_gettime). The mean timing has
also changed from iterations per unit time to actual time per
iteration.
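The snippet emitted by that helper program could be as small as the
line below (the attribute name timing_type is an assumption for
illustration); with the new convention the reported mean is simply
the total measured time divided by the number of iterations, i.e.
time per iteration rather than a rate:

    {"timing_type": "clock_gettime"}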