
How to Build a Lean Startup, step-by-step

55 points by timothychung almost 16 years ago

2 comments

swombat almost 16 years ago
Excellent talk. It repeats many of the points on Eric's blog, but they're all very good points worth hearing again.

Here's my question, though. I'm really struggling with this one, and I think Eric Ries is aware of HN, so I'd really love an informed answer (hint hint).

I run a start-up, http://www.woobius.com

We are getting reasonable traffic levels for this niche industry, but are still at a fairly early stage. We do not have thousands of visitors a day, or thousands of users a day. Each user is influenced by all sorts of special circumstances, such as whether they're in a company that we've been talking to, whether they're an architect, an engineer, a project manager, etc. As far as I can tell, they are heterogeneous, each of them mostly unique.

Moreover, the line between signup and purchase is not so clear. My start-up's product is project- and company-based. People might use it every day yet never pay for it if one of their colleagues paid for it. That doesn't mean they're not a happy customer; it just means that, for example, they're at a point in their career where they're not directing projects or making purchasing decisions.

Users also differ in their usage patterns. Some of them use our application to send files. Others only to receive or download them. Again, the users can be sliced into many heterogeneous groups by which activities they favour.

In those conditions, I find it extremely difficult to devise A/B experiments that measure things against a productive end result ("$$$", to use the notation in this presentation). We do measure and learn, but the way we do this is by talking to our users, or standing over their shoulder and watching them use the application.

I'd love to be able to implement a more scientific approach to testing out new features, but it just doesn't seem practical to me, given the circumstances of my start-up.

If I don't slice the users into more homogeneous groups before doing the A/B testing, the results will, imho, be flawed because there might easily be more users of one kind in A than in B. If I do slice them, I'll end up with groups of 10-50 users, because of all those differences that I'll have to slice for. With such small numbers, individual circumstances will, in my opinion, have far more of an effect on usage patterns than whether or not I add a button somewhere.

So how do you apply this "A/B test every change" approach to such an environment? Especially since we make many changes a day (though we deploy every few days), letting each change sit around for a week to accumulate A/B users would severely slow down our progress.

Any advice would be most welcome.
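One common way to address the "more users of one kind in A than in B" worry is blocked (stratified) randomization: group users into strata by role and usage pattern, then balance A and B within each stratum. The Python sketch below is only an illustration of that general technique, not anything the talk or the commenter prescribes; the field names, strata labels, and the stratified_assignment helper are hypothetical.

    import random
    from collections import defaultdict

    def stratified_assignment(users, stratum_of, seed=42):
        """Blocked randomization: shuffle users within each stratum, then
        alternate A/B, so every user type ends up split roughly 50/50."""
        rng = random.Random(seed)
        by_stratum = defaultdict(list)
        for user in users:
            by_stratum[stratum_of(user)].append(user)

        assignment = {}
        for members in by_stratum.values():
            rng.shuffle(members)
            for i, user in enumerate(members):
                # Counts per stratum differ by at most one between A and B.
                assignment[user["id"]] = "A" if i % 2 == 0 else "B"
        return assignment

    # Hypothetical users sliced along the dimensions the comment mentions.
    users = [
        {"id": "u1", "role": "architect", "activity": "sends_files"},
        {"id": "u2", "role": "engineer", "activity": "downloads_only"},
        {"id": "u3", "role": "project_manager", "activity": "sends_files"},
        {"id": "u4", "role": "architect", "activity": "sends_files"},
    ]
    print(stratified_assignment(users, lambda u: f"{u['role']}/{u['activity']}"))

Note that this only balances the composition of A and B; it does not solve the small-sample problem. With strata of 10-50 users, results would still need to be pooled across strata (or across repeated runs of the same change) before the numbers say much.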
Comment #623807 not loaded
dawie almost 16 years ago
I can't seem to view the webcast. Am I missing something?