RankScience automates split-testing for SEO to grow organic search traffic for businesses. 80% of clicks from Google go to organic results, and yet most companies don't know how to improve their SEO, or can't effectively measure their efforts to do so. Because of the scale of SEO and the constant change in both Google's ranking algorithm and your competitors' SEO campaigns, the only way to succeed in the long run is with software and continuous testing.<p>We've built a CDN that enables our software to provide tactical SEO execution and run A/B testing experiments for SEO across millions of pages. Experiments typically take 14-21 days for Google to index and react to changes, and we use Bayesian Structural Time Series and Negative Binomial Regression models to determine the statistical significance of our experiments.<p>Our software is 100% technical SEO, and doesn't do anything black-hat, spammy, or anything related to link-building. One of our goals is to bring transparency to and shed light on what is largely considered a shady industry, but one that is so important to so many companies' revenue and growth. In fact, if SEO didn't have such a bad reputation, we think someone else would have built this a long time ago.<p>SEO as an industry earned itself a stigma for being spammy: between buying links, creating low-quality pages stuffed with keywords and text intended for Google rather than humans, and the used-car-salesman attitude many SEOs have, many people have been conditioned to dismiss SEO as an invalid or illegitimate growth channel.<p>We're software engineers turned SEOs, and we've previously consulted on SEO for dozens of companies, from YC startups to Fortune 500 companies like Pfizer. We previously shared a case study with HN, where we increased search traffic to Coderwall with a single A/B test: <a href="https://www.rankscience.com/coderwall-seo-split-test" rel="nofollow">https://www.rankscience.com/coderwall-seo-split-test</a><p>Ask us anything!
We'd love to answer any questions you have about SEO, A/B testing, and RankScience.
A few questions about your product (I run SEO and digital marketing at LendUp - YCW12). This is very, very cool.<p>* What types of optimizations does it do/do you test?<p>* I assume tests take a while to run, waiting for Google to re-index etc. Do you essentially monitor rank changes, assume that Google has re-indexed at that point, and use that as the data to optimize on? Or do you have some smart way to monitor for when Google re-indexes?<p>* I'm again assuming here - that a user will plug in a few variations of things to test, and let your software test them? Or are there automatic optimizations the software tries to make?<p>* How many tests can one do in a given time period, without confounding test & control variants? It seems like they would take a while to run?<p>* Is there a good way to control for non-technical/non-content-based changes (e.g. external links)? For example, we get hundreds of negative links pointed at us per week. Do we just hope/assume that's not the cause of rank changes?
Guys, the idea looks great. Congrats on the launch.<p>2 SEO questions:<p>1) A CDN means "thousands of websites on one IP address". Google doesn't like that <i>unless</i> it knows the IP belongs to a well-known CDN like CloudFront/Cloudflare etc. Please comment?<p>2) An A/B test might look like "cloaking" to Google. How exactly do you run it? I assume not in parallel, but "variation A <i>then</i> variation B" - correct? If yes, does it mean I have to leave the website UNTOUCHED for 21 days so the test results are not distorted by my other activities (adding new content, internal links, etc.)?
Looks great, congrats on the launch!<p>Two points of feedback:<p>(1) The idea of the product is clearly conveyed, but I'm confused about exactly how it works. The landing page mentions that title tags, headlines, meta tags, etc. get tweaked - exactly how is this done? Do I have to manually enter a bunch of alternative text, or are you using a big fancy thesaurus to switch out some key terms?<p>(2) How do you evaluate performance of the product? Solely through click rates, or by search rankings? How often do Google search results get updated? In short, how do I know the product is working?
As somebody who wrote an article about A/B testing title tags in 2011 before it was cool [1], this is an awesome idea. I've talked about SEO with many companies, and coming up with the proper title tags and meta descriptions alone is often worth so much traffic for such little effort (once you get past the upfront cost of running the tests).<p>However, I think a critical aspect of SEO is thinking about an entire site holistically. Not only because certain signals are site-wide, but because a key aspect of SEO is deciding which pages of your website are "good" for SEO and which ones aren't, and then focusing on making the "good" pages better and not worrying about the "bad" pages. Good and bad in quotes because it is often more of an art than a science.<p>How does RankScience play into this? You've nailed the on-page stuff, but is there any world in which RankScience is able to talk about a site holistically and recommend which types of pages and content seem to be working most effectively (and maybe even suggest pages that could be removed/de-indexed)? Or do you leave that to SEO consultants and just nail the hell out of the on-page stuff?<p>1: <a href="https://www.thumbtack.com/engineering/seo-tip-titles-matter-probably-more-than-you-think/" rel="nofollow">https://www.thumbtack.com/engineering/seo-tip-titles-matter-...</a>
Though using A/B testing to improve user experience or conversion rates is fine, I thought using A/B testing to reverse engineer the ranking algorithm was against the guidelines. Has this been updated?<p>From <a href="https://support.google.com/webmasters/answer/7238431?hl=en" rel="nofollow">https://support.google.com/webmasters/answer/7238431?hl=en</a><p>> Best practices for website testing with Google Search<p>> The amount of time required for a reliable test will vary depending on factors like your conversion rates, and how much traffic your website gets; a good testing tool should tell you when you’ve gathered enough data to draw a reliable conclusion. Once you’ve concluded the test, you should update your site with the desired content variation(s) and remove all elements of the test as soon as possible, such as alternate URLs or testing scripts and markup. If we discover a site running an experiment for an unnecessarily long time, we may interpret this as an attempt to deceive search engines and take action accordingly. This is especially true if you’re serving one content variant to a large percentage of your users.<p>In addition, the advice is to use rel="canonical" to avoid duplicate-content issues with Googlebot crawling your variations. But when using rel="canonical", you shouldn't be able to see how a variation influences ranking.<p>> If you’re running an A/B test with multiple URLs, you can use the rel=“canonical” link attribute on all of your alternate URLs to indicate that the original URL is the preferred version. We recommend using rel=“canonical” rather than a noindex meta tag because it more closely matches your intent in this situation.
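For reference, the canonical setup Google's guidelines describe looks like this (hypothetical URLs, shown only to illustrate the markup in question):

```html
<!-- On the alternate/variant page, e.g. https://example.com/widgets?v=b -->
<head>
  <link rel="canonical" href="https://example.com/widgets">
</head>
```

With this in place, Googlebot is told to treat the original URL as the preferred version, which is exactly why a canonical-tagged variant shouldn't be able to reveal ranking effects.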
Just want to chime in here and share that I'm a very happy customer. Ryan, Dillon, and Chad are some super smart guys who deeply understand SEO. I use RankScience for 7 Cups and Edvance360 and I've seen a huge ROI. They always find time to meet with me and my other team members. The advice and feedback they provide is worth the cost of the service alone. Highly, highly, highly recommended!
I consult with startups and tech companies (recently: SurveyMonkey) on SEO, so this is super-interesting... especially the automated aspect, which is pretty novel in this space.<p>What are some of the specific types of automated tests that you run?
It seems like you integrate with the likes of Google (Webmaster Tools) and Cloudflare in ways not done before, so my assumptions here are probably a bit outdated.<p>Let's say you want to test a new title. You collect stats for a week, change the title, collect stats for another week. The two datasets are then compared. Am I close?<p>Does it mean that you can't A/B test in parallel? If so, it's not optimal for time-sensitive stuff like breaking news.
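Worth noting that SEO split tests can also run in parallel by splitting <i>pages</i> rather than time windows: half the URLs of a page template get the variant, the other half stay as the control. A minimal sketch of stable URL bucketing under that approach (made-up URLs; not necessarily how RankScience actually assigns pages):

```python
# Hypothetical sketch: page-bucket split testing. Hashing the URL gives
# each page a stable, deterministic assignment, so a URL always sees the
# same variant across crawls.
import hashlib

def bucket(url: str) -> str:
    """Deterministically assign a URL to 'variant' or 'control'."""
    h = int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16)
    return "variant" if h % 2 == 0 else "control"

urls = [f"https://example.com/products/{i}" for i in range(6)]
for u in urls:
    print(u, "->", bucket(u))
```

Because assignment is per-URL rather than per-time-window, the control pages absorb site-wide shocks (algorithm updates, news spikes), which sidesteps the sequential-testing confound described above.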
You guys mention < 25ms as the performance hit of having you in front of the client's application. Is there a more concrete SLA you provide? Do you have a sense of what that hit might cost in rankings (there are theories out there that time to first byte is one of the features search engines reward)?
I am a bit wary of the idea, despite the fair intentions: Google in particular tends to view anything remotely close to 'improving' search rankings using 3rd-party 'software' as a threat to its own Ego-rithms.
I wish I could use a tool like this and know that it won't affect my rankings negatively, but I feel that Google will eventually punish sites that do a lot of A/B testing.
Hey guys! Congratulations on the launch! Very exciting!<p>We've been seeing some great results with SEO testing, and have recently had our biggest test result (in terms of revenue impact). Looking forward to hearing more of what you guys are up to. :)<p>I think the fact that DistilledODN, RankScience, Etsy, and Pinterest have all published SEO split-test results recently demonstrates the importance of this type of data-driven approach to SEO!<p>Best of luck with everything!<p>Tom, Distilled
(Disclaimer: I run the <a href="https://www.distilledodn.com/" rel="nofollow">https://www.distilledodn.com/</a> team)