freelist scaling update

So, the freelist scaling news is not, alas, as good as it seemed. In my delight I rather – but not fully – jumped the gun.

The work in question is that of the elimination layer. A freelist pop cancels out a freelist push, so if two threads attempting those operations can somehow “meet up” and exchange their operations, then the freelist head pointer (the point of memory contention, which slows everything to a crawl as processor counts rise) is relieved of those two operations.

I’ve implemented this as an array of cache lines. The rule of thumb is one cache line per thread using the freelist, but I’ve no proof that this is right.

Each cache line stores an array of freelist element pointers. They start NULL.

When a thread pushes, it round-robin picks a cache line in the elimination array, and then iterates through it. If it finds a NULL, it attempts to exchange its freelist element pointer (for the element it wants to push) with the NULL. If it succeeds, well, then it just handed over its element and is free to go.

If it fails, and keeps failing (or the pointers are already non-NULL, i.e. in use), so that it gets to the end of the cache line without handing over its element, *then* it falls back to pushing onto the freelist proper.

Popping is much the same. Round robin to a cache line, iterate, find a non-NULL pointer, then try to exchange it with NULL.
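As a sketch of the idea – hypothetical names and constants throughout, since the post doesn’t show its actual code – the layout and the push/pop exchange attempts look something like this in C11:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define CACHE_LINE_SIZE_IN_BYTES  64
#define POINTERS_PER_LINE         (CACHE_LINE_SIZE_IN_BYTES / sizeof(void *))

struct freelist_element;  /* the freelist's own element type; payload lives elsewhere */

/* one cache line of element-pointer slots, all starting NULL */
struct elimination_line
{
  _Alignas( CACHE_LINE_SIZE_IN_BYTES ) struct freelist_element *_Atomic slot[POINTERS_PER_LINE];
};

/* push: walk the line, try to swap our element into any NULL slot */
static bool elimination_push( struct elimination_line *line, struct freelist_element *element )
{
  for( size_t i = 0 ; i < POINTERS_PER_LINE ; i++ )
  {
    struct freelist_element *expected = NULL;

    if( atomic_compare_exchange_strong(&line->slot[i], &expected, element) )
      return true;  /* element handed over; we're done */
  }

  return false;  /* line was full; fall back to the freelist proper */
}

/* pop: walk the line, try to swap NULL into any non-NULL slot */
static struct freelist_element *elimination_pop( struct elimination_line *line )
{
  for( size_t i = 0 ; i < POINTERS_PER_LINE ; i++ )
  {
    struct freelist_element *seen = atomic_load( &line->slot[i] );

    if( seen != NULL )
      if( atomic_compare_exchange_strong(&line->slot[i], &seen, NULL) )
        return seen;  /* took an element without touching the freelist head */
  }

  return NULL;  /* nothing here; fall back to the freelist proper */
}
```

Note that both paths finish with the fallback to the real freelist, so the elimination layer is purely opportunistic – it can only ever remove work from the head pointer, never add to it.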

Now, what happened that got me so excited was basically a fluke – I had four threads running, each in a tight pop/push loop, and each one had ended up (I wasn’t using round-robin yet, but a fixed cache line per thread) popping and pushing from its own, separate cache line. Basically, it was as if each thread had its own freelist. Result – great performance – but it’s totally unrealistic.

I then dug into it, collected cache line hit/miss stats and so on. Once I’d sorted everything out, and had the threads doing round robin with separate round-robin counters for push and pop, we end up on four cores about two to two and a half times faster than without the elimination layer – but it’s not scaling – and in fact we’re only this fast because the benchmark is particularly favourable.

The benchmark is doing a pop and then a push, in a loop. Pop and push have independent counters, both start at zero. Counters are per thread. The elimination layer begins half-full. So we pop – and we get an element. We’ve made a space. We then push. There’s *always* a space there, because we just made a space!
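The single-threaded version of this effect is easy to demonstrate. Here’s a little model of it (hypothetical names again, plain pointers since there’s only one thread): a line of slots beginning half-full, with pop-then-push in a loop – every push lands in the slot the pop just vacated, so the fallback to the freelist proper never fires.

```c
#include <stddef.h>

enum { MODEL_SLOTS = 8 };

/* returns the number of pushes which failed to find a free slot,
   i.e. the number of times we'd have had to hit the freelist head */
static size_t pop_push_model( size_t loops )
{
  static int dummy[MODEL_SLOTS / 2];
  void *slot[MODEL_SLOTS] = { 0 };
  size_t fallbacks = 0;

  for( size_t i = 0 ; i < MODEL_SLOTS / 2 ; i++ )  /* begin half-full */
    slot[i] = &dummy[i];

  for( size_t loop = 0 ; loop < loops ; loop++ )
  {
    void *popped = NULL;
    int pushed = 0;

    for( size_t i = 0 ; i < MODEL_SLOTS ; i++ )  /* pop: first non-NULL slot */
      if( slot[i] != NULL ) { popped = slot[i]; slot[i] = NULL; break; }

    for( size_t i = 0 ; i < MODEL_SLOTS ; i++ )  /* push: first NULL slot */
      if( slot[i] == NULL ) { slot[i] = popped; pushed = 1; break; }

    if( !pushed )
      fallbacks++;  /* would have fallen back to the freelist proper */
  }

  return fallbacks;
}
```

The fallback count comes out at zero no matter how long the loop runs – which is the benchmark flattering the elimination layer.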

In fact, if we use round-robin but add a hash mixing function to it, we then find about 10% misses, and performance is still better but only by about 25%.
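The post doesn’t say which mixing function; as one plausible stand-in, a Fibonacci-style multiplicative hash over the round-robin counter would do the de-correlating:

```c
#include <stdint.h>
#include <stddef.h>

/* hypothetical mixer - not necessarily the function actually used; a
   Fibonacci-style multiplicative hash (the constant is 2^64 / phi),
   which smears consecutive counter values across the lines */
static inline size_t mixed_line_index( uint64_t round_robin_counter, size_t number_of_lines )
{
  return (size_t) ( (round_robin_counter * 11400714819323198485ULL) >> 32 ) % number_of_lines;
}
```

The point of the mixer is exactly to break the lockstep which made the un-mixed benchmark look so good – hence the misses appearing.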

There are still a few things I do not understand (or there are bugs in my code somewhere – probably the benchmark app). On a single core, *with* the elimination layer we’re as fast as a spin lock, and without it we’re about 20% slower.

On two logical cores on the same physical core, without the hash mixing function, there’s a harmonic such that each thread seems to get an elimination layer cache line to itself, although this might just be the physical core knowing what its logical cores are up to and so being able to optimize the atomic operations. With hash mixing, performance drops to the same as one thread on each of two separate physical cores.

This is making me think about other ways to achieve the same end. I read about a “scalable stack” recently, which is actually a list (can’t remember the details offhand – it might be deferred updates to the list, that kind of thing).

In fact you could imagine a btree as a freelist, where popping gets a random leaf node, and pushing pushes to a random leaf node.

A hash mixer on the address of the element being pushed would be enough to take care of balance. The problem would be detaching leaf nodes – the same problem as in lists – not detaching a node which has just had an element attached to it. Harris’ delete bit would deal with this, and there’s no need for garbage collection because it’s a freelist.