Subject: Re: Towards an FSB customer bill-of-rights (Re: RH in the news)
From: Lynn Winebarger <>
Date: Mon, 18 Feb 2002 13:23:03 -0500

    Hopefully this isn't too far off-topic.

On Monday 18 February 2002 13:31, Tom Lord wrote:
> 	3. It should be possible to rebuild, test, and install each
> 	   subsystem with fewer than 10 commands where the granularity
> 	   of "subsystem" operates at multiple scales (e.g., all of
> 	   user space, all of "/sbin", the program "ls").
        I'm not a fan of this particular notion of "subsystem".   More 
problematic is the subject of "testing".
> 	4. Inter-package dependencies should be minimized and carefully
> 	   documented.  The contents of the installed system should be
> 	   carefully audited and traceable in every case to the
> 	   particular source and build of their origin.
> 	5. Each customer should be provided all of the tools necessary
> 	   to cut their own custom distributions.
> 	6. In addition to being able to receive binary upgrades over
> 	   the net, customers should be able to identify the sources
> 	   they have in the vendor's public repository, query for
> 	   issues and patches related to those specific versions, and
> 	   update to more recent versions of the source.  The path
> 	   from the public maintainer's releases to a compatible
> 	   source component of the distribution should be as simple as
> 	   practical and well documented.
> 	7. It should be practical, easy, and supported for customers
> 	   to maintain local modifications to the sources while,
> 	   nevertheless, incorporating source-level upgrades from the
> 	   distribution vendor.
      I've given some thought to this.  #5 pretty much subsumes all the 
others AFAIC.   What would be required is a kind of super-CVS, a 
comprehensive patch selection and build system, a configuration control 
system, a test harness capable of controlling multiple machines, and a 
secure deployment system.
     Here's what I mean in more detail:
   (1) super-CVS.  I use this in two senses.  (a) For a given software 
package, it would track potentially multiple CVS repositories, as well 
as other sources (such as source RPMs and mailing lists), while 
attempting to track the genealogy of patches in a hostile environment.  
By "genealogy" I mean a patch's applicability to, or dependency on, 
different upstream versions, as well as its dependency on or 
independence from other patches, with some reasonably accurate measure 
of each (obviously hard).  
(b) It tracks enough packages to constitute a working distro (though 
ideally there would be one big instance covering all available 
packages, which could also be helpful to package maintainers).  The 
difference from CVS is that CVS is geared toward developers' needs, 
whereas this system would be geared toward an outsider's needs in 
comparing different distributions.  
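To make the "genealogy" idea concrete, here's a minimal sketch of what 
a per-patch record might carry and how you'd decide whether a patch can 
be applied.  The field names and example patches are invented for 
illustration; a real system would have to infer these relations from 
hostile, incomplete data, which is the hard part.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    name: str
    applies_to: set                                 # upstream versions it is known to apply to
    depends_on: set = field(default_factory=set)    # patches that must be applied first
    conflicts_with: set = field(default_factory=set)

def applicable(patch, version, selected):
    """Can `patch` go onto `version`, given the patch names already in `selected`?"""
    if version not in patch.applies_to:
        return False
    if patch.conflicts_with & selected:
        return False
    # every dependency must already be in the selection
    return patch.depends_on <= selected
```

The point is that "genealogy" is a small relational structure per 
patch; the system's job is filling it in accurately across many 
repositories and sources.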
    (2) comprehensive patch selection/build system.  Here you would 
have a presentation of the sources for different versions of a package, 
including a way of looking at how they are amalgamated in different 
distributions, comments on each patch, a "genealogy" for each patch, 
etc.  Then a way of selecting which ones you want (possibly adopting a 
particular distro's selection in toto, mixing and matching, or making 
minor adjustments to existing patches), building and running basic 
tests on the result, and putting it into some kind of package 
management system (user's choice).  
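Once a selection is made, the build step has to apply the patches in an 
order that respects their dependencies.  A sketch of that ordering step 
(a plain topological sort; patch names and the dependency map are 
invented):

```python
def apply_order(selected, depends_on):
    """Return the selected patches in an order where dependencies come first."""
    order, done = [], set()

    def visit(patch, stack=()):
        if patch in done:
            return
        if patch in stack:
            raise ValueError("dependency cycle at %s" % patch)
        for dep in depends_on.get(patch, ()):
            if dep in selected:             # ignore deps outside the selection
                visit(dep, stack + (patch,))
        done.add(patch)
        order.append(patch)

    for patch in sorted(selected):          # sort for a stable result
        visit(patch)
    return order
```

Cycle detection matters here: with patches gathered from several 
distros, mutually dependent selections are a real possibility and 
should fail loudly rather than build something half-patched.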
     (3) configuration control system.   Here I mean both setting up 
and packaging specific configuration files for individual packages, and 
more broadly setting up a configuration for a complete running machine.
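One simple way to make a "configuration for a complete running 
machine" versionable is to reduce it to a manifest of file paths and 
content hashes.  This is only a sketch of that representation (paths 
are illustrative); the interesting work is in generating and packaging 
the files themselves.

```python
import hashlib

def manifest(files):
    """Map each config file path to the SHA-1 hex digest of its contents.

    `files` is a mapping of path -> bytes; two machines with equal
    manifests are running byte-identical configurations.
    """
    return {path: hashlib.sha1(data).hexdigest()
            for path, data in files.items()}
```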
     (4) a multiple-machine test harness.  Here you would have an 
isolated network, with a master machine and several slaves.  Given the 
previous work in configuration and building, you should be able to 
automatically deploy complete configurations on the different machines 
(from the kernel up), reboot them, and run live tests on the 
interactions (testing both correctness and performance, and possibly 
exercising your networking equipment).  It should also provide tools 
for analyzing the results.  
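The master's dispatch loop in such a harness is simple in outline.  In 
this sketch the transport is injected as a function (in practice it 
would be ssh or a serial console), so the loop itself can be exercised 
without a network; host and command names are invented.

```python
def run_tests(slaves, command, run):
    """Run `command` on every slave via `run(host, command)`.

    `run` returns (status, output); results come back keyed by host so
    the analysis tools can compare machines against each other.
    """
    results = {}
    for host in slaves:
        results[host] = run(host, command)
    return results
```

The real complexity is in the `run` side (rebooting into a fresh 
configuration, timing out wedged machines), not in the loop.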
     (5) secure deployment system.  Here you should be able to take a 
set of complete configurations and deploy them on actual machines.  One 
way would be with a CD burner and a standard install system (no choices 
necessary during installation).  There might be some reasonably secure 
way of doing this over a network, but I'm wary of my own ability to 
evaluate such schemes.  Also, for automatic deployment you'd need to 
develop a system for making sure you save all the data a currently 
running server process might have and back it up off of that machine.  
It might be doable with some ssh users and some way of getting the 
kernel to switch root file systems while leaving the real root fs 
available for manipulation.  
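Whatever the transport, the rollout itself wants a verify-as-you-go 
shape: install on one machine, check it came up correctly, and stop 
the moment verification fails rather than pushing a broken 
configuration everywhere.  A sketch (install/verify are injected 
stand-ins for the real mechanisms):

```python
def deploy(hosts, install, verify):
    """Install the configuration host by host, stopping at the first
    host that fails verification.  Returns (hosts done, failed host)."""
    done = []
    for host in hosts:
        install(host)
        if not verify(host):
            return done, host       # halt the rollout here
        done.append(host)
    return done, None
```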
      (6) I didn't put these above, but you'd also want a way of 
tracking system integrity, including when configuration files were 
modified on a machine, and of doing it securely from one station.  The 
only method I've come up with (using current tools) would be to have an 
ssh user on each system with access to some sudo commands, and on the 
control system an encrypted set of keys and databases; the interface 
would take a master password (long and random, of course) and then use 
those keys to run tripwire or similar to gather results from the 
individual systems.  You'd also keep stats on how each machine is 
running.  
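The comparison step behind that is small: the control station keeps a 
keyed (HMAC'd) baseline of each machine's configuration files and 
diffs a fresh snapshot against it.  This is just a sketch of that 
step, not a substitute for tripwire over ssh/sudo; the key and paths 
are invented.

```python
import hashlib
import hmac

def sign(key, path, data):
    """Keyed digest of one file, so a tampered database can't forge entries."""
    return hmac.new(key, path.encode() + data, hashlib.sha1).hexdigest()

def changed(key, baseline, snapshot):
    """Paths in `snapshot` (path -> bytes) whose digests no longer match."""
    return sorted(p for p, data in snapshot.items()
                  if baseline.get(p) != sign(key, p, data))
```

Keying the digests is what lets the control station trust results even 
if an individual machine's integrity database is itself compromised.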
       The main idea here is for a sysadmin to sit down at one console 
and develop, test, deploy, and track custom distros over many machines, 
and not only feel they have control but be able to demonstrate that 
control to someone else, in terms of looking at how their choices have 
impacted performance over time in real usage, and of knowing exactly 
what is and was on their systems at any given time.  
       I'm sure distributors have developed at least some of these 
tools for internal use.   I don't have this particular itch anymore, 
but I worked on building my own mini-distro long enough to have a 
pretty good idea of what I wanted.  The last bit could even feed back 
into making better engineering choices at the maintainer level (i.e., 
hard evidence of what the software is doing in the wild).
        And I'm pretty sure this wouldn't qualify as making Linux into