Subject: Re: Back to business [was: the Be thread]
From: "Stephen J. Turnbull" <turnbull@sk.tsukuba.ac.jp>
Date: Thu, 25 Nov 1999 13:33:11 +0900 (JST)

After this, I'll remove Steve from the CC list unless he wants to
participate in the thread.

>>>>> "Craig" == Craig Brozefsky <craig@red-bean.com> writes:

    Craig> Could you elaborate on how the increase in expense of doing
    Craig> review testing downstream versus testing and reviewing
    Craig> upstream is relevant to "Brooks law" of communications
    Craig> overhead as testers are added?

Testing downstream depends on the principle that "enough eyes implies
enough brains behind them that at least one of them is foolish enough
to do something we never thought about," thus eliciting a bug report.
Beating on this horse with larger and larger numbers of testers costs
the core team nothing, as long as the testers are willing to donate
the time to test (if they're really testing and not simply using the
bleeding edge in production) and refine and write up bug reports
(which is a cost to the testers, but not to the core team).  On the
other hand, it runs into diminishing returns very quickly.  It
certainly will not generate "6-sigma" quality unless you're willing to
sacrifice years vis-a-vis the state of the art.  (I don't want to
reopen that thread; it's just the extreme that "more testers" can't
achieve.)
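
To make the diminishing-returns point concrete, here is a toy model
(mine, not anything from the thread): if each independent tester has
some small probability p of stumbling onto a given obscure bug, the
chance that at least one of N testers finds it is 1 - (1 - p)^N, so
the marginal value of each additional tester shrinks geometrically.
The rate p = 0.01 below is an arbitrary assumption.

    # Toy model: probability that at least one of n independent
    # testers exercises the code path that triggers a given bug.
    def p_found(p: float, n: int) -> float:
        return 1.0 - (1.0 - p) ** n

    p = 0.01  # assumed per-tester hit rate; purely illustrative
    for n in (10, 100, 1000, 10000):
        gain = p_found(p, n + 1) - p_found(p, n)
        print(f"{n:6d} testers: P(found) = {p_found(p, n):.4f}, "
              f"one more adds {gain:.2e}")

Each added tester costs the core team nothing, but past a point buys
almost nothing either, which is exactly why this route cannot reach
"6-sigma" quality on its own.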

Testing upstream involves activities like these (the stub sketch
below illustrates the third item):

  - reviewing designs and code as a team (doesn't scale);
  - building and maintaining regression test suites (possibly scales);
  - building various dummy modules for test purposes where the real
    things don't yet exist (doesn't scale: the more workers, the more
    effort per worker is duplicated or must be spent avoiding
    duplication);
  - discussing and understanding APIs and protocols (maybe scales if
    revisions are few, but involves spin-up costs for individuals and
    is definitely a communications overhead);
  - coordinating testing and integration schedules for systems test,
    etc.
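
As a hedged illustration of the "dummy modules" item, here is a
sketch in Python; all the names (StorageBackend, FakeStorage,
parse_and_store) are invented for this example.  The stub lets a
component be regression-tested before the module it depends on
exists; and since each team that needs such a stub tends to write its
own, this is where the duplicated effort comes from.

    class StorageBackend:
        """Interface the real, not-yet-written module will implement."""
        def put(self, key: str, value: str) -> None:
            raise NotImplementedError

    class FakeStorage(StorageBackend):
        """Dummy module: records writes in memory for assertions."""
        def __init__(self) -> None:
            self.writes: dict[str, str] = {}

        def put(self, key: str, value: str) -> None:
            self.writes[key] = value

    def parse_and_store(line: str, backend: StorageBackend) -> None:
        # Component under test: parse "key = value" and store it.
        key, _, value = line.partition("=")
        backend.put(key.strip(), value.strip())

    def test_parse_and_store() -> None:
        fake = FakeStorage()
        parse_and_store("color = blue", fake)
        assert fake.writes == {"color": "blue"}

    test_parse_and_store()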

The argument that Steve McConnell made in the prepublication
discussion was that in today's best practice[1], most of the bugs are
shaken out upstream; the ones left for downstream are increasingly
expensive to find and less susceptible to "more eyes" methods.  This
_increases_ the cost advantage of applying resources upstream, so
that eventually testing costs are dominated by the upstream
methodology, which is subject to Brooks's Law.
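
The arithmetic behind Brooks's Law is worth spelling out (this is the
standard statement of it, not anything specific to McConnell's
argument): full pairwise communication among n workers requires
n*(n-1)/2 channels, so coordination overhead grows quadratically
while added labor grows only linearly.

    # Pairwise communication channels among n upstream workers.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (2, 5, 10, 20, 50):
        print(f"{n:3d} workers -> {channels(n):4d} channels")

Downstream testers escape this because each reports to the core team
individually; upstream reviewers and designers do not.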

I don't see a flaw in your reasoning about "compensating by using
downstream testers," except that I think it highly unlikely to achieve
truly high levels of quality, save where modularizability implies
high localizability of bugs, and thus efficient distribution of
testing, anyway.


Footnotes: 
[1]  Which he expects will increasingly become tomorrow's common
practice, lagging, of course, behind tomorrow's best practice.

-- 
University of Tsukuba                Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
Institute of Policy and Planning Sciences       Tel/fax: +81 (298) 53-5091
__________________________________________________________________________
__________________________________________________________________________
What are those two straight lines for?  "Free software rules."