1. merge all data structure variants into a single API
2. write docs for all data structures being published in r7
3. write unit tests for all data structures being published in r7
4. compile and test on some extra platforms
I spent Sunday figuring out what to do about documentation with regard to MediaWiki. MediaWiki is surprisingly weak when it comes to manipulating pages in bulk; I can move pages around in bulk using a script, but I can't copy pages in bulk. Right now all pages are in the "main" namespace and they are the docs for release 6, which has been out for two years.
What I've done, since the release 6 docs need to be kept, is move them to a new namespace, "r6". I'm now using "main" for the release 7 docs. The problem is that re-creating all the docs for release 7 is a lot of work, since I can't copy pages, only move them, and we're talking about something like a hundred pages. So I don't want to do that every time I make a release, especially not when I'm thinking of making releases more often, where the doc changes are likely to be small and where fixes should be backported to recent earlier releases.
So my idea is to use "main" for current and ongoing docs, i.e. while I'm making a rapid sequence of releases and the docs aren't changing much. Once a release has been out for a long time and I then bring out a big new release, I'll archive its docs into a new namespace and start fresh with docs for the new release.
I've also updated the "download & license" page to have a table indicating, for a given release, which platforms it has been tested on, which it supports but has not been tested on, and which it does not support. This will allow me to issue releases which have, for example, only been tested on my dev machine (Windows 7, 64 bit, Windows SDK plus GNUmake).
So, liblfds.org has moved from a VM rented from datarealm.com to a VM in the Amazon Cloud.
It's taken a little while, but I'm getting there; Bugzilla needs to be configured and the forum content needs to be brought over (if that's possible, as I've moved from MySQL to PostgreSQL).
Right now, the library internally has one API per data structure variant, e.g. dff_queue (double word CAS, freelist, patent free), ssp_queue (single word CAS, SMR, patent free), etc. I'm going to move this back to one API per data structure type, where the new() call indicates which variant to use.
As mentioned in a previous post, there was an outstanding question about the SMR freelist and ringbuffer, in that they now use malloc/free, which is odd, since those data structures are used precisely when you want preallocation. In fact, the use of SMR with these data structures is what gives you the ability to use them with single word CAS; you should still keep using pre-allocated memory. So that change has to occur.
Finally, I now intend to move to a more rapid release schedule. By this I mean that when making a release I will indicate which platforms the release has been compiled and tested on; this allows me to release often for my normal development platform (Windows/x64) and then, when the time comes to fully publish, put in the time and effort to compile and test on the full range of supported platforms.
Spent about half of today debugging slist.
The first test adds new elements to the head. I made it work with a single thread, fixing about half a dozen bugs; a general shakedown, since the code hadn't been used at all before and is fairly complex. Then I ran it with one thread per core, and it passed. Which means the test is broken; I suspect each thread is finishing before the next starts up, so there's little or no real contention. Also, this particular test, which is basically a stack push, largely avoids all the complex code paths.
Still, it’s hours spent that need to be spent, making progress.
And I can prove it!
So, I now live in Stockholm.
Yesterday I started work again. Something gentle to get back into the flow of things: I fixed a bug in ssf_stack and added the ssf_queue tests.
The big piece of work that remains is writing and making work the tests for ssp_slist.
Limbering up for that…