Finally got rid of the frames.
Moved the site back over to HTTPS (had time to sort out the certs from LetsEncrypt). There’s still an HTTP server for the site; it redirects to HTTPS. I’m fairly sure everything is working.
WordPress updated to v5 and the new editor is absolutely appalling, with no way to use the previous editor. It’s about unusable. It uses a tiny part of the screen for editing, the text and cursor keep being moved around when you perform operations in the UI, and the cursor disappears at times - not to mention the usual bizarro-world, totally unexpected jumping around when you’re editing and moving the cursor. That’s really the worst thing: the unpredictability and inconsistency of cursor movements. You have to think and pay attention to move the cursor; it can’t be done on autopilot, because the movements are not consistent. The answer is editing in a real text editor and just pasting here.
I’ve had to write a book, the last few months.
Another month or two to go.
Then back to working on the next release.
Working on the test and benchmark programme.
There’s a lot of functionality, so a lot of work.
Currently getting thread-based testing off the ground.
Making progress. Two main things to sort out - how to handle durations and memory allocation.
The standard library isn’t used, so there are no time functions.
Duration is number of iterations.
Problem is, tasks (tests and benchmarks) vary in how long an iteration takes, and iteration time will also vary from platform to platform.
So some users will want fewer iterations, some more, and the number of iterations will still need to vary by task.
Memory allocation is a problem for the command line wrapper.
The test and benchmark code itself performs no allocations, but the caller has to.
The test and benchmark programme, being general, then needs to deal with NUMA and NUMA/shared memory, on both Windows and Linux.
So there are quite a few variations of allocation method.
Once that’s airborne, then process-based.
After that, then re-implementation of all tests, and also then a complete set of benchmarks.
At that point I can get back to actually coding lock-free data structures… =-)
(This Wordpress post was brought to you by Mousepad.)
Addendum - it’s even worse than I realised. You have to click “Publish” twice to publish.
So I’ve been banging away at getting the threading test/benchmark code working.
I had originally looked to use the Windows-style CreateProcess() approach on Linux (you can do so), but this is wrong - the way Windows does this is more limited, and I can imagine use cases on Linux where the more flexible fork() method will be -necessary-. So now Windows will use CreateProcess() (which needs a command-line binary to execute) and Linux will use fork(), and that choice then defines the porting abstraction layer.
I also decided to move all memory allocation fully back out to the test/benchmark library caller - there was helper code in the library, but it was just making life complicated and would potentially get in the way for some users.
So I’ve sorted out the fundamentals of the porting abstraction layer.
That leaves the duration problem - bare C89, so no timers. How do we know how long to run tests for? See the previous post for details.
What’s just occurred to me is to classify each test as slow or fast, and on the command line, the user can specify the number of iterations for each class, to override the defaults.
It’s some complexity, but there has to be some user involvement because there is no way to measure time, so it’s impossible in the code to figure out how long to run a test for.
I’ve been working on the new test/benchmark programme.
It’s a lot of work. There’s a -lot- of code.
Problem is, the test code is not serious. All it does is run lots of threads doing lots of work and tries to make things go wrong - there’s no directed effort to cause unusual cases, it’s just run-and-hope - and there’s fairly limited checking you can do at the end of the test to see if data is correct.
It does find bugs, but that’s the most you can say.
I need - and I’ve known for a long time that I need - a more formal, rigorous test.
This is not a straightforward problem to solve. There’s ongoing research work into how to do this in a timely and practical manner - which is to say, on unmodified or essentially unmodified source code, no translating the source code into a model for a modeller (since such work obviously leads to bugs).
I just read a fairly recent (2014) white paper. What’s being done, basically, is enumeration of paths of execution, with culling of paths which are equivalent, and with the compiler back-end being co-opted to notice when memory accesses occur (although I don’t know how it really knows, since what the hardware does is different from what the compiler does - OTOH, in theory, the world looks normal to a single thread, so maybe that all works out, and the processor ensuring the world looks normal is enough?). Then there’s a correctness checker, which I’ve not yet read about, so I’m not sure how it works.
The code is on GitHub and it’s for C++, but you know open source - it doesn’t work. I’m going to need to write my own.