Subject: Re: Back to business [was: the Be thread]
From: Ian Lance Taylor <ian@airs.com>
Date: 25 Nov 1999 00:26:09 -0500

   From: "Stephen J. Turnbull" <turnbull@sk.tsukuba.ac.jp>
   Date: Thu, 25 Nov 1999 13:33:11 +0900 (JST)

   After this, I'll remove Steve from the CC list unless he wants to
   participate in the thread.

I left him off.

   Testing upstream involves doing things like reviewing designs and code
   as a team (doesn't scale), building and maintaining regression test
   suites (possibly scales), building various dummy modules for test
   purposes where the real things don't yet exist (doesn't scale, the
   more workers the more effort/worker is duplicated or needs to be spent
   avoiding duplicative effort), discussing and understanding APIs and
   protocols (maybe scales if revisions are few but involves spin-up
   costs for individuals, and is definitely a communications overhead),
   coordinating testing and integration schedules for systems test, etc.

gcc is a case your list doesn't obviously cover.  It's a highly portable
compiler.  The core development team does not have access to all the
machines on which gcc runs.  Using downstream testers is essential to
catch bugs which are specific to certain systems.  Note that a bug
which appears on a specific system can in fact be a generic bug with
weird triggering conditions.

I expect that the Linux kernel has similar characteristics when it
comes to testing on weird hardware.  Windows does too; you just don't
notice it because the hardware manufacturer, not the system
development team, does the testing and writes the driver (in fact,
that's an interesting case of a distributed development project).

A gcc bug report which includes a simple test case is ordinarily
folded into the regression test suite with little effort.  So that is
a case where building the regression test suite scales, once the
required initial effort has been made.

I don't have a point to make.  I'm just adding some random data.

Ian