SMR

Made the new SMR compile.

There’s an outstanding design issue, which is how to handle new threads, since they will not have the correct generation count.

I know basically what I’m going to do – there will be a flag, one of the bits in the generation count / flags word, which indicates a new thread, and it’ll get special handling.

MongoDB HID

Bringing on the MongoDB rage here.

I’ve just spent an hour – wasted an hour – due to really, really bad interface design and a lack of documentation. Those are *bad* reasons to spend time on a problem.

The default port for “mongo”, the command line client, is 27017. However, if you have a replica set, your mongo servers are also running something called “mongos”, which handles replication and inter-node communication.

It turns out on the server I have mongos is running on port 27017, and MongoDB on port 27018 – and it also turns out that if you fire up the command line client against a mongos server, *it will connect*… only the commands you then try to run, which work when you’re connected to a MongoDB, fail with error messages like “can’t use local on mongos”, and Googling for this finds nothing useful.

So, basically, there’s this fake mongodb-a-like server running on your system which you can connect to and issue commands to and they all fail with errors which don’t explain the problem (unless you already know what the problem is).

This reminds me of the Samsung web-site. For quite a long time, they had *two* web-sites for drivers, one real and up-to-date, the other old and not maintained, and Googling would lead you to the old site, where you would then scratch your head, wondering where the hell the drivers were for your new phone.

This isn’t the only problem I’ve had with MongoDB. Earlier I came to set up a replica set. The instructions on their site *do not work*. Just do not work. D O N O T W O R K. I had to Google to find what was actually necessary to configure the set.

I would say MongoDB is quite a bit above average in terms of docs for an open source project. There is at least a doc page for every command, even if the content is minimal. This is why for production work I avoid all but the most mature open source projects. The actual experience of trying to use open source in general is typically problematic, and the problems utterly outweigh any advantages you might be obtaining.

Linux’s email server config is C’s function pointer declaration syntax

The worst part of Linux is setting up email servers.

A long time ago, I set up postfix and dovecot for the first time. It took a week.

Now, years later, I want to add a new system user (“alerts”) and have him receive email.

The postfix part was fine – got it right first time and it worked; it’s just another virtual email address, after all.

The dovecot part?

Not so good. I’ve tried for an hour and I’ve *given up*. There’s an utterly unhelpful error about “permission denied”. Googling reveals half-a-dozen contradictory solutions, and those I did try made no difference whatsoever. It is not reasonably possible to figure out complex systems without error information or meaningful documentation – which is every single bloody Linux email server in one sentence.

SMR design flaw and improvements

So, the freelist rapid push/pop test, using SMR, revealed an SMR bug, which in turn revealed that the changes I made to the SMR function for advancing the generation counter – making that function multi-threaded, rather than a critical section bounded by a CAS flag – were broken.

What I’ve come up with now instead is the idea of a “setting” phase, where threads set their SMR state flags, followed – once a thread sees that the generation counter can be advanced – by a “clearing” phase, where threads calling the generation-advance function check not whether the generation counter can be advanced, but whether all the per-thread flags have been cleared, so that we are back in the correct state for threads to begin indicating their SMR state again, which in turn lets us once more see whether the generation counter can be advanced.

The whole point of all this is to ensure any single thread entering the check function can make forward progress – before, this was not the case. I think the design is sound, although I’ve not yet implemented it, because…

…I’ve realised in the course of this there is a design flaw, and I’ve not yet resolved it. To handle idle threads, I have it so that when a thread enters a lock-free section, it sets a flag – “LOCK_FREE_IN_PROGRESS” – and lowers that flag on exit. The design flaw is that the code only uses a store barrier, so there’s no guarantee that flag is visible to other threads (even those which have issued a load barrier) until an atomic operation is performed – which typically occurs in the act of *performing* the lock-free work being done, i.e. *after* we’ve read in and are using sensitive memory addresses.

Any real SMR has to support idle threads, so I have to think of a solution.

Two steps forward, one step backwards, the mantra of all lock-free design work 🙂

Some real actual work

Well wadda ya know – I’ve actually done some real actual work today – about six or so hours. I’ve been working on the first real SMR test, making it work and getting a feel for what the new SMR API is like to use. Found a design flaw in how SMR data structure cleanup was arranged and fixed it.

The basic usage principles of the new SMR API seem to simplify down to something like this: think only about your current thread; take it as read that anything you put into SMR won’t come out until you call SMR release processing; and remember that any given call to SMR release processing may release zero elements, but that sooner or later, calling it will release elements. In practice this often means writing your operation as a loop, which exits when the operation is successful, and which calls SMR release processing whenever the operation fails.
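As a rough illustration of that pattern – the type and function names here are invented for the sketch, not the actual API – allocating from an SMR-guarded freelist might look like this:

    /* Hypothetical sketch of the usage pattern described above; the type and
       function names are invented for illustration, not the actual API.
    */

    #include <stddef.h>

    struct freelist;
    struct smr_per_thread_state;

    void *freelist_pop( struct freelist *fl );
    void smr_release_processing( struct smr_per_thread_state *smr );

    void *allocate_element( struct freelist *fl, struct smr_per_thread_state *smr )
    {
      void *element;

      for(;;)
      {
        /* the actual lock-free operation */
        element = freelist_pop( fl );

        if( element != NULL )
          break;

        /* the pop failed - elements may be sitting in SMR awaiting release, so
           run release processing and retry; any single call may release zero
           elements, but sooner or later a call will release elements
        */
        smr_release_processing( smr );
      }

      return element;
    }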

Google! evil AND awful

I’ve had a little trouble recently emailing via my own server, so I had cause to try to resend one or two emails – I have a Gmail account, titled “Review Account”, which I use to publish reviews on Google Maps, and I thought to use this.

I logged into Gmail, then after literally ten minutes of hunting – including very, very nearly being led into creating a G+ account without knowing it – in my by-then-desperate clicking on anything at all I stumbled across the method to change the account name, and did so. (Along the way I discovered Google had added a new advertising option, which was set to “yes”, whereby my profile would be used to endorse products.)

I then sent my emails. I then discovered they were still sent as “Review Account”. I presume it is in fact necessary to log out and then log back in, to have the name changed in the Gmail session.

All in all – “Google! evil AND awful”.

SMR design

So, I actually did a bit of Googling to see the literature for SMR. I wanted to see if I was missing something obvious. Probably I still am 🙂

However, it looks like I might actually have something minutely novel, so I figured I’d write it up a bit here.

As an aside, I’ve also now made the generation-advancing function safely multi-threaded, so callers don’t need to worry about that anymore, and you’ll still get forward progress, etc.

So, SMR design.

It’s epoch based.

There’s a main SMR state, and there’s one state per thread.

A thread registers its state with the main state, which has an atomic, add-only list of thread states (a thread state can be active, available, retired, etc; since the list is add-only, callers try to re-use an available state in the list; each thread state also knows its NUMA node number, since at the per-thread level we care a lot about NUMA appropriateness).

Each thread state has a single state variable, which is used as a bitmask, so we can atomically (single CAS) modify all the state information in one operation.

The main state has a generation counter which begins at 0.
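To make that concrete, here’s a minimal sketch of what the two states might look like – the field names and flag values are mine, invented for illustration, not the real structures (the per-thread reuse candidate list is described a little further down):

    /* Illustrative sketch only - field names and flag values are invented. */

    #include <stdatomic.h>

    /* per-thread status bits (the same bitmask word also records whether the
       slot is active / available / retired, omitted here for brevity)
    */
    #define SMR_FLAG_IN_READ_SECTION      0x01  /* raised on entering a read section, lowered on exit */
    #define SMR_FLAG_EXITED_READ_SECTION  0x02  /* raised on exiting a read section; lowered only by
                                                   the generation-advance function                     */

    struct reuse_element;

    struct smr_per_thread_state
    {
      _Atomic unsigned int          state;       /* single bitmask word, so all per-thread state can
                                                    be modified in one operation (a single CAS)        */
      int                           numa_node;   /* for per-thread NUMA appropriateness                */
      struct reuse_element          *reuse_list; /* single-threaded list of reuse candidates           */
      struct smr_per_thread_state   *next;       /* linkage in the main state's add-only list          */
    };

    struct smr_main_state
    {
      _Atomic unsigned long long               generation;        /* generation counter, begins at 0  */
      _Atomic(struct smr_per_thread_state *)   thread_state_list; /* atomic, add-only list             */
    };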

So, we begin operations – make a main state, each thread makes a per-thread state and registers with the main state.

All the threads begin idle.

Now a thread comes to perform an operation using lock-free operations; it calls a macro, “BEGIN_LOCK_FREE”, which non-atomically sets a bit in the per-thread state indicating a read section is in progress, and issues a store barrier.

The thread then finishes the lock-free operations, and so calls a second macro, “END_LOCK_FREE”. This non-atomically clears the “read section in progress” bit, and sets a second bit, “exited read section”.
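A sketch of what BEGIN_LOCK_FREE and END_LOCK_FREE might amount to, continuing the illustrative structures above; shown as functions for readability (the real things are macros), with C11 relaxed loads/stores standing in for the non-atomic bit operations and a release fence standing in for the store barrier:

    /* Illustrative only - continues the per-thread state sketch above. */

    static inline void begin_lock_free( struct smr_per_thread_state *pts )
    {
      /* non-atomic read-modify-write of the state word: plain load, plain store */
      unsigned int s = atomic_load_explicit( &pts->state, memory_order_relaxed );
      atomic_store_explicit( &pts->state, s | SMR_FLAG_IN_READ_SECTION, memory_order_relaxed );

      /* store barrier */
      atomic_thread_fence( memory_order_release );
    }

    static inline void end_lock_free( struct smr_per_thread_state *pts )
    {
      unsigned int s = atomic_load_explicit( &pts->state, memory_order_relaxed );

      s &= ~(unsigned int) SMR_FLAG_IN_READ_SECTION;  /* lower "read section in progress" */
      s |= SMR_FLAG_EXITED_READ_SECTION;              /* raise "exited read section"      */

      atomic_store_explicit( &pts->state, s, memory_order_relaxed );
      atomic_thread_fence( memory_order_release );    /* assumed: a store barrier here too */
    }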

When a thread has removed an allocation from the (say) lock-free data structure and wants to reuse it, it places the allocation in a single-threaded list which is in the per-thread state, noting against it the current generation count from the main state.
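Submission for reuse might then look something like this (again illustrative, continuing the sketch; the reuse_element structure and function name are invented):

    /* Illustrative only - continues the sketch above. */

    struct reuse_element
    {
      void                  *allocation;
      unsigned long long    generation;   /* main state generation at submission time */
      struct reuse_element  *next;
    };

    /* single-threaded: only the owning thread ever touches its own reuse list */
    void smr_submit_for_reuse( struct smr_main_state *ms,
                               struct smr_per_thread_state *pts,
                               struct reuse_element *re,
                               void *allocation )
    {
      re->allocation = allocation;
      re->generation = atomic_load_explicit( &ms->generation, memory_order_relaxed );
      re->next = pts->reuse_list;
      pts->reuse_list = re;
    }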

So, now we’re humming along – threads are entering and exiting read sections, threads are submitting elements for reuse.

Now what?

Now we come to the point where the user calls the function to try to advance the generation counter in the main state.

We iterate over the list of thread states in the main state. Now, each thread has two status bits we care about – one is a flag which is raised when the thread enters a lock-free section and lowered when it leaves, the other is a flag which is raised when a thread exits a lock-free section; the threads themselves never lower this second flag – it is lowered, in every thread state, by the function we’re in now, the one trying to advance the generation counter, if and only if the generation counter is advanced.

What we need, to advance the generation counter, is that every thread has exited a lock-free section (which means all elements queued for reuse are safe) or has been idle (hasn’t entered a lock-free section at all, since the last scan, and so all elements queued for reuse are safe).

So; if we see the exited flag is set, we know a thread has been in and has exited a read section – we’re good.

If we see the active flag is set, and the exited flag is set, we’re still good.

However, if we see the active flag is set, and the exited flag is *not* set, then we’re screwed – we can’t advance the counter.

So, if we can’t advance the counter, we don’t, that’s that. We return.

If we can advance the counter, we do so – but now comes one final vital point.

So – we have threads queuing up elements for reuse. The generation counter begins at 0, so these elements have a generation count of 0. When we check to advance the generation counter, we need to know that every thread has been completely idle, or has exited a read section *after the element was submitted for reuse*. However, when we do get round to scanning the thread states, all we can see is that the exit bit has been set… so we know all the threads HAVE exited a read section, but we don’t know WHEN. It could have been (say) right after only the very first element was submitted – and it might then be that one of the threads has been stuck in a long-running read section all the time since then – which would mean only the first element was actually safe for reuse.

How do we handle this?

The answer is that when a thread comes to scan its release candidate list, comparing each element’s generation counter with the current main state generation counter, elements are only released when the difference is at least TWO, not ONE.

In other words – having done this first scan (and finding all threads have been idle or have exited), we do advance the generation counter *and we set the exited bit in each thread state to lowered* (this is vital, remember it) – but that does not mean the previous generation (0 in this case) can now be released. It cannot – for we do not know when each thread exited, so we cannot know which elements are safe to release. However, having lowered the exit bit, when we come to scan again, if we see *AGAIN* that all threads have been idle or have exited a read section, THEN ALL THREADS MUST HAVE EXITED **AFTER** THE FINAL GENERATION 0 ELEMENT WAS SUBMITTED – which means generation zero is now safe to release.
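Pulling that together, the generation-advance scan might look roughly like this (continuing the illustrative structures from earlier; this sketch ignores making the advance function itself multi-threaded, which is discussed elsewhere):

    /* Illustrative only - returns 1 if the generation counter was advanced, 0 if not. */

    int smr_try_advance_generation( struct smr_main_state *ms )
    {
      struct smr_per_thread_state *pts;

      /* first pass: is any thread blocking the advance? */
      for( pts = atomic_load_explicit( &ms->thread_state_list, memory_order_acquire );
           pts != NULL;
           pts = pts->next )
      {
        unsigned int s = atomic_load_explicit( &pts->state, memory_order_acquire );

        /* a thread which is inside a read section and has not exited one since
           the last advance blocks the advance
        */
        if( (s & SMR_FLAG_IN_READ_SECTION) && !(s & SMR_FLAG_EXITED_READ_SECTION) )
          return 0;
      }

      /* every thread has been idle or has exited a read section - advance the
         generation counter and lower every thread's "exited" bit, so the next
         successful scan proves every thread exited *after* this point
      */
      atomic_fetch_add_explicit( &ms->generation, 1, memory_order_relaxed );

      for( pts = atomic_load_explicit( &ms->thread_state_list, memory_order_acquire );
           pts != NULL;
           pts = pts->next )
        atomic_fetch_and_explicit( &pts->state,
                                   ~(unsigned int) SMR_FLAG_EXITED_READ_SECTION,
                                   memory_order_relaxed );

      return 1;
    }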

So we’re always a generation behind on releasing.
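The per-thread release scan is then the counterpart (again a sketch; the function name and callback are invented) – an element is released only when it is at least two generations behind the current main state generation:

    /* Illustrative only - releases every reuse candidate which is at least two
       generations behind the current main state generation.
    */

    void smr_release_scan( struct smr_main_state *ms,
                           struct smr_per_thread_state *pts,
                           void (*release_callback)( void *allocation ) )
    {
      unsigned long long current = atomic_load_explicit( &ms->generation, memory_order_acquire );
      struct reuse_element **re = &pts->reuse_list;

      while( *re != NULL )
        if( current - (*re)->generation >= 2 )
        {
          struct reuse_element *safe = *re;

          *re = safe->next;                       /* unlink from the single-threaded list */
          release_callback( safe->allocation );   /* element is now safe to reuse */
        }
        else
          re = &(*re)->next;
    }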

I’m thinking I must have missed an obvious way to simplify this – but I can’t see it. We have to know if a thread has entered a lock-free section (to detect idle threads). We have to know when all the threads have exited (so we can know to advance the generation counter). I mean, basically, the design is that threads indicate they’ve exited a lock-free section, and we notice this only whenever the user calls the generation counter advance function, so we can’t know when they exited, only that they have – so we have to have two rounds of every thread having exited to know that the generation before last is clear and safe to release.

Inverted SMR

So, yeah, thought about it a bit.

What stops the generation counter from advancing past a given generation? A thread which is in a read section. So when a thread enters a read section, it posts in its per-thread state the current main state generation value, and then clears that when it exits the read section. When we come to release reuse candidates, we scan the per-thread states, pick up any busy threads’ posted main state generation values, and the lowest of those values is how far up we can release reuse candidates.

Idle threads are always permissive – no need for any extra house-keeping to detect them – and what’s nice is that because we’re now reversed, it doesn’t matter if a thread, when posting, reads an older version of the main state counter – it just makes us less efficient, rather than breaking the system.
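A standalone sketch of that inverted idea (all names invented; this is the idea under discussion, not the current implementation):

    #include <stdatomic.h>
    #include <stdint.h>

    #define NOT_IN_READ_SECTION  UINT64_MAX  /* sentinel: idle threads are always permissive */

    struct inv_per_thread_state
    {
      _Atomic uint64_t              posted_generation;  /* main generation seen on entering a read
                                                           section; initialised to NOT_IN_READ_SECTION */
      struct inv_per_thread_state   *next;
    };

    struct inv_main_state
    {
      _Atomic uint64_t                          generation;
      _Atomic(struct inv_per_thread_state *)    thread_state_list;
    };

    /* on entering a read section, post the current main generation */
    void inv_read_section_enter( struct inv_main_state *ms, struct inv_per_thread_state *pts )
    {
      uint64_t g = atomic_load_explicit( &ms->generation, memory_order_acquire );
      atomic_store_explicit( &pts->posted_generation, g, memory_order_release );
    }

    /* on exiting, clear the posted value - the thread no longer blocks releases */
    void inv_read_section_exit( struct inv_per_thread_state *pts )
    {
      atomic_store_explicit( &pts->posted_generation, NOT_IN_READ_SECTION, memory_order_release );
    }

    /* reuse candidates stamped with a generation below the returned value are safe to release */
    uint64_t inv_release_limit( struct inv_main_state *ms )
    {
      uint64_t limit = atomic_load_explicit( &ms->generation, memory_order_acquire );
      struct inv_per_thread_state *pts;

      for( pts = atomic_load_explicit( &ms->thread_state_list, memory_order_acquire );
           pts != NULL;
           pts = pts->next )
      {
        uint64_t posted = atomic_load_explicit( &pts->posted_generation, memory_order_acquire );

        /* a busy thread pins the release limit at the generation it posted */
        if( posted < limit )
          limit = posted;
      }

      return limit;
    }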

However, it still means the main state counter has to increment every time a thread enters a read section (incrementing it only on reuse is no good – I think you end up needing a period where no threads are in read sections to be able to release reuse candidates, otherwise on a busy system you always see threads in read sections) and these increments do need to be atomic – if we lost a write, a thread would think it is in an earlier generation than it really is, so we could reuse elements not yet safe to reuse.

Basically, I’m barking up the wrong tree – for performance, all the information which lets a scan advance the generation counter has to be stored and maintained in the per-thread state, with only read access to the main state. Only the scan to advance the generation counter can write the main state.

This is how the current mechanism works.

New SMR design

So! I don’t think this idea works, because to make it work the performance would be too poor, but…

You have a main state, and a per-thread state.

Each per-thread state registers with the main state.

The main state holds a counter, the “current counter”, which begins at 0. Every time a thread adds an element to its reuse candidate list, it stores in the reuse candidate the value of the main counter, and atomically increments the main counter. (On a busy system, the contention on that counter would drive performance into the ground – however, there is a scalable counter design…)

There is another counter in the main state, the “safe counter”, which also begins at 0 – it being safe to reuse elements up to this value.

Every time a thread exits a read section, it stores the current value of that main counter in its thread state.

When we come to check to see if we can advance the safe counter, we iterate over the thread states, looking at each one’s counter (the value of the main state current counter when that thread last exited a read section), and find the lowest value of them all (ignore idle threads for a moment). We then advance the safe counter to this lowest value – i.e. this is the point after which not all threads have exited a read section, and so the reuse candidates beyond it cannot yet be reused.

To deal with idle threads, we keep a flag in the thread state, which is raised when the thread enters a read section and lowered when we check to advance the safe counter. If the flag is lowered, then the thread has been idle.
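A standalone sketch of this design (names invented; as noted, the per-submission atomic increment of the current counter is where the crippling contention comes from):

    #include <stdatomic.h>
    #include <stdint.h>

    struct ctr_per_thread_state
    {
      _Atomic uint64_t              exit_counter;  /* value of the current counter when this
                                                      thread last exited a read section      */
      _Atomic int                   entered_flag;  /* raised on entering a read section,
                                                      lowered by the safe-counter check      */
      struct ctr_per_thread_state   *next;
    };

    struct ctr_main_state
    {
      _Atomic uint64_t                          current_counter;  /* begins at 0                  */
      _Atomic uint64_t                          safe_counter;     /* begins at 0 - elements stamped
                                                                     below this are safe to reuse */
      _Atomic(struct ctr_per_thread_state *)    thread_state_list;
    };

    /* stamping a reuse candidate: the heavily contended atomic increment */
    uint64_t ctr_stamp_reuse_candidate( struct ctr_main_state *ms )
    {
      return atomic_fetch_add_explicit( &ms->current_counter, 1, memory_order_relaxed );
    }

    /* on entering a read section, raise the flag */
    void ctr_read_section_enter( struct ctr_per_thread_state *pts )
    {
      atomic_store_explicit( &pts->entered_flag, 1, memory_order_release );
    }

    /* on exiting a read section, record how far the current counter had got */
    void ctr_read_section_exit( struct ctr_main_state *ms, struct ctr_per_thread_state *pts )
    {
      uint64_t c = atomic_load_explicit( &ms->current_counter, memory_order_acquire );
      atomic_store_explicit( &pts->exit_counter, c, memory_order_release );
    }

    /* advance the safe counter to the lowest exit counter of any non-idle thread */
    void ctr_try_advance_safe_counter( struct ctr_main_state *ms )
    {
      uint64_t lowest = atomic_load_explicit( &ms->current_counter, memory_order_acquire );
      struct ctr_per_thread_state *pts;

      for( pts = atomic_load_explicit( &ms->thread_state_list, memory_order_acquire );
           pts != NULL;
           pts = pts->next )
      {
        uint64_t e;

        /* an idle thread (flag still lowered) places no constraint; lower the flag as we go */
        if( atomic_exchange_explicit( &pts->entered_flag, 0, memory_order_acq_rel ) == 0 )
          continue;

        e = atomic_load_explicit( &pts->exit_counter, memory_order_acquire );
        if( e < lowest )
          lowest = e;
      }

      /* never move the safe counter backwards */
      if( lowest > atomic_load_explicit( &ms->safe_counter, memory_order_relaxed ) )
        atomic_store_explicit( &ms->safe_counter, lowest, memory_order_release );
    }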

In conceptual terms, ignoring actual practical performance, the design has high fidelity with regard to knowing which elements can be reused. As each element has its own generation count, and we know up to which generation count it is safe to re-use elements, we reuse every possible element we can.

What strikes me now though is that perhaps another way is to invert the paradigm; rather than assuming we can’t advance, and then checking to see how far it is safe to advance (and so having trouble with idle threads, since they are silent), what about assuming we can advance (so idle threads naturally say the right thing), with blockers in the way for the points beyond which we cannot advance?

Have to think about this, might just be crazy in the first instance, will see.

SMR Redux

So, my previous post, about a design flaw in SMR, was incorrect.

Since I’d not touched that code for a while, I was not fully understanding what was going on. In fact, the code knows whether a thread has at any time in the past (since the most recent generation advance) exited a read section, and so is not dependent on checking at a moment when no threads are in read sections.

So, I’ve been working on the tests, getting them to work, and I’ve finally come to a test which is really using SMR in anger – which of course reveals what it’s like to actually use the API. That API has changed: it is now no longer called automatically every now and then when entering/exiting a read section or submitting an element, but explicitly and manually by the user.

One inherent aspect of SMR, which is visible and awkward for the developer, is that it is never possible to guarantee that an attempt to advance the generation counter will work. Another thread which is inside a read section blocks the advance of the generation counter beyond its current value. Of course, read sections are by design intended to be extremely brief, so in practice it’s not an issue – but it does mean no guarantee.

One thing which has become clear is that the function call which attempts to advance the generation counter has to be multi-threaded – it is ugly and onerous to put upon the user the burden of ensuring this is only called by a single thread at a time.