I’ve been properly re-reading Drepper’s white paper about DSOs.
I’m working on the simple CGI server for the mailing list (crazy – I’m sitting here writing a server around epoll so I can write a CGI) and for that I need another data structure library I threw together a year or three ago (never published), libstds, which is just a collection of single-threaded data structures.
So I need to tart this up a bit now – bring it to the same presentation as liblfds. Thing is, Drepper’s paper has raised the question of how I’m going about versioning.
Drepper makes a pivotal point: the reason DSOs are used is that when we distribute security or bug fixes, we can update the entire system by replacing a single DSO, whereas if we had statically linked, we would have to recompile every binary using the library in question.
(BTW, I can’t use DSO-internal symbol versioning at all, because it’s Solaris and Linux only.)
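For context, this is a minimal sketch of what that DSO-internal versioning looks like on the platforms which do support it – a GNU ld version script binding symbols to a version node. The `LIBSTDS_1.0` node name and `stds_list_init` symbol are hypothetical, purely for illustration:

```
LIBSTDS_1.0 {
  global:
    stds_list_init;
  local:
    *;
};
```

The script is passed to the linker via `--version-script`; since only GNU ld and the Solaris linker understand it, it cannot be relied upon for portable libraries.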
So this means that the API must remain constant between versions, so that new versions work as drop-in replacements for the old.
This has one problem: it precludes the concurrent use of multiple versions of the library.
I want this for two reasons. Firstly, when new versions of the library are released *not* for security or bug fixes, but for new functionality, then if users adopt them, they need to revalidate their code. I would rather existing code were completely unchanged – i.e. the *binary of the DSO is unchanged* – as this is the only way that effort is not required. Secondly, I want to be able to benchmark multiple versions of the library in the benchmark programme (so users can see how performance changes over time).
Concurrent use of multiple versions requires that APIs differ by version. This means DSOs cannot be used as drop-in replacements.
Right now, where the API changes on every release (as it contains the version number), we get some of the code-sharing benefit of DSOs (everyone using the same version re-uses the DSO in memory, as opposed to there being just the one DSO and everyone using it) but we do not get the linking benefits of DSOs (you *do* need to relink to use a new version, rather than simply replacing the DSO). We note though that this last behaviour is something explicitly eschewed, as it requires revalidation.
At a pinch, if it were needed, I could handle multiple versions concurrently in the benchmark by code manipulation; after all, the benchmark code contains a copy of each of the earlier liblfds libraries. I can modify that local copy.
It certainly is the case that users normally expect a stable API, with the DSO changing behind the scenes. The fantasy here of course is that the authors of the DSO introduce no new bugs, unexpected behaviour, etc., such that revalidation is not required.
What we see in fact is that software being software – i.e. extremely complex and so error prone – the point of maximum validity for an application is the point where all of its dependencies (DSOs, etc.) are at the versions in use when the test suite was run (and even this of course holds only for that OS version, those hardware revisions, etc., etc.).
Of course, even at that point, how valid the application is depends on the quality of the test suite.
In all things, there are factors which encourage, and factors which discourage, and in the end you get what you get.
So we see that in the actions we can take, we move from the point of maximum validity (the systems the test suites have been run on – system here in all its glory: DSO versions, hardware, OS, etc.) to points further away from it, where we can see benefits in other domains from such moves (being able to easily distribute security and bug fixes by DSO updates).