The start-ups of most interest are necessarily exceptional, significantly different from past start-ups, successful or not. So, evaluating start-ups to find the ones of most interest is challenging, and evaluations via simple, empirical patterns from the past promise to select a lot of straw and miss some golden needles.<p>Really, the challenge here is common, nearly standard, and part of a very old story that goes back
to nothing less than the <i>Mother Goose</i>
children's story "The Little Red Hen":
What the hen was doing was unusual and, therefore, not in the experience of others, so no one would help her. But when she had hot, fragrant loaves of bread fresh out of her oven and eager, hungry, paying customers lined up to buy, lots of people were ready to <i>help</i>. In the interim, though, she had to work alone with just her own evaluation, creativity, and determination. No doubt that story is in
<i>Mother Goose</i> because the situation
was both common and ancient.<p>What is needed are better means of evaluating
projects. For a special, relatively small,
collection of projects, there are such means,
highly polished, e.g., for grant applications
to NSF, NIH, and DARPA, similarly for
Ph.D. dissertation proposals, and also
for a huge range of US DoD projects, e.g.,
the SR-71, the F-117, GPS. Generally
these projects and their evaluations have
much better <i>batting average</i> than Silicon
Valley equity funded information technology
start-up projects.<p>Maybe what Silicon Valley is doing now does make money, and the YC $30+ billion is astoundingly impressive, but one major success can be worth $300 billion, 10 times as much, so we have to suspect that better evaluations could lead to better returns.