So, two lines of work are in progress.

First, setting up every version of GCC, so I can compile, test and benchmark against them all.

Second, Safe Memory Reclamation.

With regard to GCC, as I’ve been progressing with this work, I’ve been learning more about the GCC/compiler eco-system. Stuff I vaguely knew, but which I did not particularly know, and which I am now particularly learning about; there are in fact three pieces in this puzzle. There is GCC itself, then there is binutils (which provides the linker and assembler, both of which GCC calls when compiling C) and then there is libc. Each has its own – long – version history, with lots of releases. A canonical combination might be those versions which were available at the time a given GCC release was made – but this is by no means the only possible combination; I think there is a lot of scope for different versions to be used together.

The script I’m putting together to generate all the GCC versions currently has two steps; build a given release with the local compiler, then build that release again with the newly built compiler itself, i.e. download 4.1.2, build it with the installed GCC (4.9.2), then use the 4.1.2 I now have (which was built with 4.9.2) to build 4.1.2 again – so I have a 4.1.2 which was compiled by 4.1.2.
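The two-stage bootstrap can be sketched roughly as follows – the paths, the version number and the configure flags here are illustrative assumptions, not the actual script;

```shell
#!/bin/sh
# Sketch of the two-stage bootstrap - assumed layout: sources unpacked
# under $HOME/src, installs under $HOME/gcc/<version>.

VERSION=4.1.2
SRC="$HOME/src/gcc-$VERSION"
PREFIX="$HOME/gcc/$VERSION"

# Stage 1: build the release with whatever GCC is installed locally.
mkdir -p build-stage1
( cd build-stage1 &&
  "$SRC/configure" --prefix="$PREFIX-stage1" --enable-languages=c &&
  make && make install )

# Stage 2: rebuild the same release with the stage 1 compiler, so the
# final 4.1.2 was itself compiled by 4.1.2.
mkdir -p build-stage2
( cd build-stage2 &&
  CC="$PREFIX-stage1/bin/gcc" \
  "$SRC/configure" --prefix="$PREFIX" --enable-languages=c &&
  make && make install )
```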

This means I’m using my local libc (2.19-18, compiled with GCC 4.8.4 on a Linux 3.16.7 system) and my local binutils (2.25).

I don’t actually myself care about libc, because liblfds itself does not use it; nor does libtest or libbenchmark (although they do use libpthread – huh, maybe a fourth piece in the puzzle). Only the test and benchmark veneers use libc, and they use it very lightly – printf and malloc, mainly. Now I think of it, they also use libnuma.

I have something of a memory now about not mixing libc versions with different compiler versions. I need to check on this. If it’s true, then I HAVE to recompile libc for every GCC I use. That would then leave the question of whether I can use binutils across GCC versions.

So, basically, pulling on a thread here. I care most about compilation problems, since they are fatal to users, and any user compiling with a given compiler version is likely to be on a coherent system and so will have the appropriate versions of the other libraries and tools. However, I do rather care about getting a sane final compile, so I can benchmark across compiler versions. Actually, I think I also care about libc versions for benchmarking, because I benchmark locking-based mechanisms too.

So, some Googling indicates that in theory you should only use a library with the compiler it was compiled with. This is because in theory each compiler has its own ABI. In practice, the C ABI is very stable. Nevertheless, I actually want to compile all the different libpthread versions, to see how their locking mechanisms vary in performance. (In fact, I’m using GCC 4.9 now with a libc compiled by 4.8.4, which looks wrong!)

Also, it looks like binutils has a relationship to GCC and also to libc – certain versions are needed for certain versions.

So; GCC, libc, libnuma, libpthread, binutils.

If I only care about compilation errors, I only need GCC. I don’t even need to bootstrap, really – it’s not very likely there will be compilation problems which occur only if the compiler is compiled with itself.

If I care about benchmarking, then I need the lot.

(More Googling – pthreads is part of libc, apparently – the libc compile, I guess, producing more than one library.)

Moving on – second, Safe Memory Reclamation.

I’ve been trying to get my head around the core implementation of hazard pointers as presented by McKenney, here;


The three core functions are provided, and they’re only a few lines of code each.

I’ve seen the white paper, of course, from Maged Michael. For me, it is utterly impenetrable. I understand – or think I understand – the concept of hazard pointers, and the concept is as simple and straightforward as the white paper is impenetrable. I don’t think much of the white paper.

The code itself – I didn’t get it. Probably part of the problem has been that it’s out of context; when you’re dealing with memory barriers and so on, you often need to see what all the code is doing, to be able to follow the paths of execution, to see what gets flushed to memory when, and so on.

However, a day or so ago, I had an insight.

The code itself only issues memory barriers, so I couldn’t understand how it could work. When other threads came to check to see if the per-thread hazard pointers were pointing at something, how could they be guaranteed to have seen the values set by other threads?

Then it came to me – or at least, I think it came to me 🙂

The code is using the atomic operations performed *BY THE DATA STRUCTURE* to flush its writes to memory.

In my epoch-based SMR, on entry to a read section, I’m issuing my own exchange to guarantee my status flag change is pushed to memory. In fact, I need to find a way to use the atomic op performed by the data structure to do that job instead.
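A minimal sketch of the idea, assuming GCC builtins – the names here (protected_read, hazard_pointer, top) are my own illustrations, not code from the paper. The store to the hazard pointer gets only a compiler barrier; the insight is that the atomic op the data structure performs afterwards (e.g. a CAS on the top pointer, which with the __sync builtins acts as a full barrier) is what makes the hazard pointer visible to other threads before the element could be reclaimed;

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-thread hazard pointer slot (one thread shown). */
static void * volatile hazard_pointer;

/* The shared data structure's top-of-stack pointer (illustrative). */
static void * volatile top;

/* Hazard-pointer-protected read of top; plain store plus a compiler
   barrier only - no memory barrier or atomic op is issued here. */
void *protected_read( void )
{
  void *p;

  do
  {
    p = top;
    hazard_pointer = p;                         /* plain store */
    __asm__ __volatile__( "" : : : "memory" );  /* compiler barrier */
  }
  while( p != top );  /* re-check: did top change under us? */

  return p;
}

/* Illustrative pop: the CAS here is the data structure's own atomic
   op, and it is this (a full barrier with the __sync builtins) which
   flushes the earlier hazard pointer store to memory. */
void *pop_sketch( void )
{
  void *p = protected_read();

  if( p != NULL )
    __sync_bool_compare_and_swap( &top, p, NULL );  /* full barrier */

  return p;
}
```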

Oh – and there’s a third thing.

I read a post by Drepper, about static linking. He said – never do it. Always use shared objects – and he provided arguments which convinced me; indeed, there is one simple argument which is by itself enough. What happens if you have to issue a security fix? With a shared object, you replace the shared object. With static linking, you have to recompile all the dependent programs.

With this thought in mind, it seems to me now that I am going about versioning in the wrong way. When I release a bugfix, I bump the API name prefix (7.1.0 to 7.1.1), i.e. I never in fact release fixes to any already-released library. That WAS the idea, in fact – but I’m going about it in the wrong way, because this way forces recompilation of the dependent programs.
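The conventional alternative is soname versioning, where bugfix releases keep the same soname, so dependent programs pick up the fix at load time with no recompilation. A sketch, assuming hypothetical file names (liblfds is used here purely for illustration);

```shell
# Link the library with a soname encoding only the interface version;
# the full file name carries the bugfix number.
gcc -shared -Wl,-soname,liblfds.so.7.1 -o liblfds.so.7.1.1 lfds_objects.o

# At install time, a symlink maps the soname to the newest bugfix:
#   liblfds.so.7.1 -> liblfds.so.7.1.1
# Programs linked against the 7.1 soname then load 7.1.1 (and any
# later 7.1.x) automatically - no recompilation of dependents.
ln -sf liblfds.so.7.1.1 liblfds.so.7.1
```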