I’ve run into a cute little problem to do with initializing the test and benchmark programme.
The test and benchmark code is in a library – there’s a command line wrapper provided for convenience. The code is in a library so users on embedded platforms and the like can use the test and benchmark functionality.
The code in the library performs no allocations – the user passes in memory. The user could after all be on an embedded platform with no malloc, with nothing but global arrays to work with.
The library code is complex enough that there needs to be some flexibility in memory allocation, so the user-provided memory becomes the store behind an API, and that API offers malloc()-like behaviour.
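As a concrete sketch – the names here are my own invention, not the real API – the user-supplied store plus malloc()-like behaviour could be as simple as a bump allocator:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical names - a bump allocator over caller-supplied store,
   so an embedded user can pass in a global array instead of malloc. */
struct store_state {
  char   *store;
  size_t  size;
  size_t  used;
};

void store_init( struct store_state *ss, void *store, size_t size ) {
  ss->store = store;
  ss->size = size;
  ss->used = 0;
}

void *store_malloc( struct store_state *ss, size_t bytes, size_t align ) {
  /* round the current offset up to the requested alignment */
  size_t offset = (ss->used + (align - 1)) & ~(align - 1);
  if( offset + bytes > ss->size )
    return NULL;  /* caller's store is exhausted - no hidden malloc */
  ss->used = offset + bytes;
  return ss->store + offset;
}
```

The library only ever carves allocations out of whatever block the user handed over, which is the whole point of the design.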
The test and benchmark code, being NUMA aware, needs an allocation on each NUMA node.
Asking the user to do the work to figure out his NUMA arrangement is quite onerous, though – and in fact the porting layer already has this code in it.
So what we really want is to get the topology info from the porting layer.
To do this though we need… some store from the user.
So it kinda looks like first we make a topology instance, and then the user uses this to find out about his NUMA layout and make allocs.
To make a topology instance though the user needs to know how much store to allocate – and that’s the cute little problem.
How do you write code which can either work and do its job, *or*, tell you how many bytes of store it will need to do its job?
Now if the function is “flat”, in that it needs no allocations to *find out* how much store it needs, then it’s straightforward enough.
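The flat case might look something like this – the function is a made-up stand-in, but the shape is the usual one: given NULL (or too little store), report the byte count instead of doing the work:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical "flat" function: it can compute its store requirement
   up front, without needing any allocations along the way. */
int do_job( void *store, size_t *size ) {
  const size_t required = 64;      /* knowable without any allocations */
  if( store == NULL || *size < required ) {
    *size = required;              /* tell the caller what we need */
    return 0;
  }
  memset( store, 0, required );    /* enough store - actually do the job */
  return 1;
}
```

The caller calls once with NULL to learn the size, allocates, then calls again to do the work.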
However, if the function is “deep”, and it needs allocations as it goes along just to find out how much store it needs, then life is more complicated. What must happen is that the user calls the function repeatedly, each time passing in as much store as the function asked for the time before, until he gets the same answer twice.
There are Windows functions like this.
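The caller side of the deep case looks something like this – the deep function here is contrived (its answer grows once it has some working store, as a real deep function’s would), and all the names are mine:

```c
#include <stddef.h>
#include <stdlib.h>

/* Contrived "deep" function: it needs working store *while* computing
   its requirement, so the size it reports grows between calls. */
size_t deep_store_needed( void *store, size_t size ) {
  (void) store;
  size_t required = 32;            /* stage one is always visible */
  if( size >= 32 )
    required += 96;                /* stage two only discoverable with store */
  return required;
}

/* The caller's loop: repeat until the same answer comes back twice,
   each time passing in as much store as was asked for the time before. */
void *alloc_for_deep( size_t *final_size ) {
  void *store = NULL;
  size_t size = 0;
  for( ;; ) {
    size_t required = deep_store_needed( store, size );
    if( required == size )
      break;                       /* stable - we have enough */
    free( store );
    store = malloc( required );
    size = required;
  }
  *final_size = size;
  return store;
}
```

Note the loop only terminates because the reported size eventually stabilizes – which is exactly the contract the deep function has to honour.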
Problem is… now it’s quite a bit of extra work, and I’m not sure I’m *getting* very much for all this.