Hi! I am Lyal Avery, founder of PullRequest (<a href="https://www.pullrequest.com" rel="nofollow">https://www.pullrequest.com</a>) - we’re currently in the YC S17 batch. PullRequest offers code review as a service.<p>We built PullRequest to help developers. After waiting several days for feedback on a pull request while a colleague was on vacation, I knew there had to be a way to improve this process. Our mission is to improve code quality and save time for dev teams. We combine static analysis and linting tools with real on-demand reviewers to augment your current code review process. Dev managers like the extra coverage, but our real intent is to free up developers to make better software more efficiently.<p>We’re onboarding experts across many different languages for exactly this reason: sometimes a team has only one person working within a given framework/language, and it can be difficult to get objective feedback before shipping to production if you’re working on an island.<p>All reviewers sign NDAs to protect your IP. We start with surface-level reviews – compliance with framework or language standards, algorithmic issues, performance, and other questions. Since our reviewers keep working on the same projects, they also gain the context needed for deeper reviews.<p>Looking forward to hearing your thoughts and feedback!
I am skeptical that this can work well.<p>Having deep understanding of the code in question is essential for a good code review. Not just the code under review, but the wider scope of the project. This helps spot architectural problems, inconsistencies, unearth hidden assumptions or assumption breakages, and the like.<p>Reviewing the code as a drive-by loses all of those benefits and boils down to focusing on the code at hand, coding style, nitpicks, and implicitly assuming the code fits well with the rest (enforcing consistent coding style and pointing out code smells is certainly useful, these however can be automated to some extent by linters and services like CodeClimate).<p>I have been a reviewer in hundreds of pull requests, and reviews I've done where I have been intimately familiar with the existing code base were consistently much better than the reviews I did as an outsider to the project - even when, knowing this, I spent a lot more effort on the reviews as an outsider.<p>The founders seem to recognize this (it's mentioned in the TC article) and mention pairing up reviewers with the same companies, but this IMHO will not be enough, unless these reviewers are basically on retainer and work regularly, and often, with the same company.<p>I'd love to be proven wrong, so good luck PullRequest team!
This looks like something that could catch on, especially if you're already compartmentalizing projects into libraries – that alleviates a lot of the hesitation in sharing a codebase. It's good to see that NDAs are involved as a layer of protection.<p>There are things a human can suggest that computers can't, such as a refactoring suggestion.<p>Here are a few ideas:<p>- Consider adopting a standard like EditorConfig (<a href="http://editorconfig.org/" rel="nofollow">http://editorconfig.org/</a>) so reviewers have compliant indentation out of the box<p>- For Enterprise packages: perhaps there could also be an opportunity to sub-contract out features and write tests?<p>- Consider experimenting with internal CI tools (as is done in open source projects) to scan for obvious/low-hanging fruit automatically<p>- Scanning for / suggesting package updates<p>- Provide QA / auditing for a large open source project for exposure<p>- Security auditing<p>Here are things that are good to hear:<p>- Static / Linting: things like vulture, flake8, etc. seem like a nice set of tools to stick to. It's good that these linters support configuration files
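For illustration, the kind of low-hanging fruit flake8 and vulture catch automatically looks like this (a made-up toy module, not anyone's real code):

    # toy_module.py - made-up example of what the linters flag on their own
    import os                    # flake8 F401: 'os' imported but unused
    import sys

    def unused_helper():         # vulture: unused function 'unused_helper'
        return 42

    def main():
        total=sum([1,2,3])       # flake8 E225/E231: missing whitespace around '=' and after ','
        print(total)
        sys.exit(0)

    if __name__ == "__main__":
        main()

Anything in that category can be caught before a human ever opens the diff, which leaves the paid reviewer's time for the suggestions a machine can't make.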
I am very skeptical about this service.<p>Aside from cosmetic changes (which should be automated anyway), code reviews are better served by people who intimately know the problem we are trying to solve. Some code can look pretty neat (and pass the review) but still be a mistake to have overall.
Roughly speaking, I think there are 3 aims for code review:<p>1. Style/consistency, re-use of existing code, utils, etc.<p>2. Architecture/design, how does this fit into the rest of the codebase, scaling concerns, how will the deploy work, will this have race conditions, etc.<p>3. Knowledge sharing with other members of the team.<p>Currently, it looks like this would partially cover 1 and 2, but it will miss the (possibly large) amount of context that people working on the project have. To be honest, I don't know how you solve that. How does a reviewer who lacks knowledge of the codebase spot a common pattern and know that another dev abstracted it out into a util a few weeks ago, for example?<p>I also wonder what could be done to address (3). I've seen the team I work on go from a place where everyone could review everything to a place where I can't review all the code that goes live, and particularly after time off, I can't really catch up. I'd love to see some sort of automated changelog of useful notes on what has changed. I'm not sure if this is possible, but summarising merged PRs, highlighting config changes, showing new utilities that have been added, etc., would be quite valuable – a rough sketch of what I mean is below.
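Something along these lines would be a starting point for the merged-PR part – a rough sketch against the GitHub search API, where the repo name, token, and date are placeholders and the config/utility highlighting is left as an exercise:

    # changelog_sketch.py - rough sketch: list recently merged PRs as a changelog
    import requests

    REPO = "your-org/your-repo"      # placeholder
    TOKEN = "YOUR_TOKEN"             # placeholder: personal access token with read access
    SINCE = "2017-08-01"             # placeholder: start of the period to summarise

    def merged_prs(repo, since):
        # GitHub issue search supports the is:pr, is:merged and merged:>= qualifiers
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": "repo:%s is:pr is:merged merged:>=%s" % (repo, since)},
            headers={"Authorization": "token " + TOKEN},
        )
        resp.raise_for_status()
        return resp.json()["items"]

    if __name__ == "__main__":
        for pr in merged_prs(REPO, SINCE):
            # one line per merged PR: number, author, title
            print("#%s (%s): %s" % (pr["number"], pr["user"]["login"], pr["title"]))

That only gets a flat list of titles; the genuinely useful part – spotting config changes and new utilities inside the diffs – would need more work.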
Seems like a good idea, but I wonder about the true quality of the review? In my experience, only a true team member who's familiar with the project (i.e. has actually been working on it) can provide a quality code review. Beyond that, they're just looking at ways to optimize blocks or find weird bugs in non-breaking recursive lines...
I'm a huge fan of static analysis and code quality, and am really excited to see where this goes.<p>It would be nice to see a demo video before giving full access to my private repos.<p>> Pricing > Standard starting at $49 per month*<p>> * Billing is dependent on amount of meaningful change per month. $9 per user per month for static analysis.<p>This metric is pretty unclear. Does this mean hourly billing based on reviewer time? Are there tiers or an upper bound? Is there a different tier for open source? Is the pricing different for surface vs deep reviews?<p>As one of those weird people that thinks doing code reviews and managing code quality is really fun, if I wanted to become a reviewer, what's the vetting process like?<p>Can you elaborate on, besides involving humans, how the underlying service is different than Code Climate, Codacy, etc?<p>P.S. Found a small bug on your dev signup form which I reported on Twitter. It would be awesome to be able to help review PullRequest using PullRequest ;).
My suspicion is this:<p>All the issues someone with no familiarity with the code base or the problem could typically uncover are things that are prone to be automated away by software in the long run (or are already in the process of being automated).
I would love this as an individual when learning new languages on my own projects. I find it really hard to tell if I'm actually doing things the "right" way without talking to someone more experienced.
Awesome idea, just signed up to help out and review code! Is there an incentive / gamification system to reward strong reviewers so their reputation increases as they provide good feedback to companies?
<i>All reviewers sign NDAs to protect your IP.</i><p>How does your company back this up? What happens if one of your developers violates this? Will you pay for the legal fees?
Do we expect them to provide feedback like "this algorithm is not right because XYZ" or "I fixed this algorithm to work correctly"? Those are very different levels of service, and I think defining exactly what someone should expect will really help set expectations.<p>I also think that this seems absurdly cheap, and I can't imagine it scaling with quality reviewers. Would love to be wrong on this one.
I like this idea, it seems useful for all the ways described. My skepticism comes from the reviewers themselves. I think they will have a hard time attracting and keeping top talent who can provide high-quality reviews as such talent will want to be creating code, not only reviewing it. I'm not sure how they would resolve this.
Very interesting. What are your thoughts about independent developers using this as an education tool? It would be really nice to get external input on projects I'm using to teach myself new technologies and patterns.
I am very interested in a product like this, even just for individual use. Your pricing says $49/month depending on "meaningful" changes suggested? What does that mean?<p>Another good use case is new programmers in a production environment: having an extra pair of eyes over their shoulders to make sure they aren't making rookie language mistakes (simpler ways of doing things, etc.), letting the company's own engineers focus on architectural/roadmap-related issues.<p>This, to me, would also be the fastest way to learn.
Dang. One of the most painful things to do in this field is digging through someone's code. I don't even like figuring out MY old code. I'm surprised reviewers are voluntarily submitting themselves to this torture :D Cool program though, hope it takes off for you.
What are the benefits of reviewers over automated testing?<p>My workflow (which I believe is pretty standard) is:<p>* Write code<p>* Verify that tests pass locally (including stylistic tests, linting)<p>* Submit pull request<p>* Pull request triggers build and tests on Travis<p>* If all tests pass on Travis, code is stylistically and functionally correct<p>* Merge pull request<p>How can human reviewers improve this workflow?
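For concreteness, the local "verify" step in that list is essentially one script – a rough sketch assuming a Python project that uses flake8 and pytest (swap in whatever tools your project actually uses):

    # pre_pr_check.py - rough sketch of the local "verify" step, assuming a
    # Python project that uses flake8 (style/lint) and pytest (tests)
    import subprocess
    import sys

    CHECKS = [
        ["flake8", "."],     # stylistic checks / linting
        ["pytest", "-q"],    # functional tests
    ]

    def main():
        for cmd in CHECKS:
            print("running: " + " ".join(cmd))
            if subprocess.call(cmd) != 0:
                print("check failed: " + " ".join(cmd))
                return 1
        print("all checks passed - ready to open the pull request")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Travis then runs the same commands again on the pull request, so the question stands: what does a human reviewer add on top of this?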
Great idea! I agree that $49/mo is a bit steep if targeting startups. Though at the same time, each PR could easily take an hour to review, so it could get time-consuming fast. Is there any free trial?
Congrats on building this product, guys. This tool is very interesting for startups with only one developer, and for freelancers. However, $49/month may be quite expensive for these people.
You should edit the submission description to make <a href="https://www.pullrequest.com/" rel="nofollow">https://www.pullrequest.com/</a> a clickable link. I've seen that done for other Launch HN submissions.