test_and_benchmark/libbenchmark/inc/libbenchmark/libbenchmark_porting_abstraction_layer_operating_system.h
#define LIBBENCHMARK_PAL_TIME_UNITS_PER_SECOND( pointer_to_time_units_per_second )
#define LIBBENCHMARK_PAL_TIME_UNITS_PER_SECOND( pointer_to_time_units_per_second ) *(pointer_to_time_units_per_second) = NUMBER_OF_NANOSECONDS_IN_ONE_SECOND
This define is mandatory and the library cannot compile if it is not set.
The benchmarks, to be fair, must be accurate in their duration: each benchmark should run for as close to the same length of time as possible. This requires much better accuracy than the one second granularity of time_t or the uncertainties of clock_t, and so we turn to high resolution timers, which are typically backed by a counter running at the processor's operating frequency.
Two macros are provided to access the high resolution clock. The first, LIBBENCHMARK_PAL_GET_HIGHRES_TIME, gets the current time. The value it returns is correct but in arbitrary units. To convert it to seconds, we must also know how many of those units make up one second, and that is the purpose of this macro, LIBBENCHMARK_PAL_TIME_UNITS_PER_SECOND.
Generally, operating systems expose high resolution timers through the same pair of functions (get the current time / how many units per second), but Linux is different: it normalizes everything to nanoseconds. As such, there is no function to ask how many time units pass in one second - it is simply the number of nanoseconds in one second (indeed, Linux actually reports time in nanoseconds, so no conversion is required).