"Figure 4 shows the result of an experiment we ran to test rendering with JavaScript for better web performance. . . [The test failed and] even after we turned off the experiment, it took almost a month for pages in the enabled group to recover..."<p>This is the incredibly frustrating part of White Hat SEO w/Google because:<p>(a) At nearly every SEO conference, by Google's own actions in releasing Page Speed Insights, and by Matt Cutts engaging and talking about page speed being an important indicator it seems like speed is pretty !@$!@% important to Google and worth pouring resources into. (1) (2)<p>(b) As a result a well equipped and well connected organization like Pinterest launches tests designed to improve this important signal. I'm going to assume they're organizationally smart enough to not damage or ignore other important Google signal ranks like usability, time on site, etc that you have to balance w/the JS page speed test.(3)<p>(c) Google penalizes them.<p>WTF!<p>My frustration as a customer acquisition guy - encompassing CRO / SEM / SEO / etc - is that I try to discuss and push best practices for my own projects, for clients, and for public facing blogs / presentations / etc.<p>I get that they don't want people gaming / pushing - but when they push out a "best practices" methodology like page speed, and then execute a penalty as described by Pinterest, I just want to throw my hands up.<p>(1) <a href="https://developers.google.com/speed/pagespeed/insights/" rel="nofollow">https://developers.google.com/speed/pagespeed/insights/</a><p>(2) <a href="http://www.webpronews.com/today-on-the-matt-cutts-show-page-speed-as-a-ranking-factor-2013-08" rel="nofollow">http://www.webpronews.com/today-on-the-matt-cutts-show-page-...</a><p>(3) I'm going to add the disclaimer of not having seen the JavaScript that pinterest used and perhaps they're not properly weighting / aware of other important SEO signals that GOOG penalizes when using Javascript, but I'm sure they are. Happy to answer more on that directly via my profile or this thread.
Unfortunately this only reinforces the mystical nature of SEO.
1. Why does Webmaster Tools tell you duplicate titles are a problem but changing them has no impact?
2. Why does repeating pin descriptions improve traffic drastically when we're told not to duplicate content?
3. Why do some changes have a lingering impact while others revert to pre-change behavior?

That said, I applaud the scientific approach to coping with the black box.
I think they could have had more fun experimenting with image indexing. Say you Google a nice location, for example Edinburgh, Scotland. On that SERP there are two locations for images: the 5th result position, and the knowledge box to the right, which has a map and an image.

The first image in the 5th-position image area is Wikipedia (hard to beat that), but the last three are local blogs and Flickr (easier to beat). The very last image is the same image used in the knowledge box, which sits nicely in eye line with the 1st-position SERP link.

After a quick bit of detective work I've found that in Google Images, Pinterest links back to its own page but is not the image source.

Type into the Images search: site:pinterest.com intitle:Edinburgh, Scotland

Back on the Edinburgh, Scotland SERP, looking at those images in the 5th position, we can see that all of them are both the page and the source.

The Flickr image that's third in the 5th-position image area would be grounds for at least a small experiment to test the theory: if Pinterest were both the page and the source, would they see a benefit reflected in their organic search traffic?

What Pinterest lacks is content, which they stated in the post. What they don't lack is images and titles.
I'm presuming the negative results they saw from "rendering with JavaScript" mean, specifically, that they moved certain page-rendering tasks from the server side to the client side. (It wasn't explicitly stated, but that's the implication.)

If so, that's a big reinforcement of the importance of server-side rendering for SEO purposes or, for you JavaScript fans, isomorphic applications.

I know this is talked about a lot anecdotally, but it's interesting to see it so starkly laid out in an experiment by a major site.
When I spoke with some senior Google search guys, this is what I walked away with:

Be a good player. Provide good content that your users care about. Positively add to the Web. Google will find you.
On the duplicate-title test, I wonder if they saw no difference because they put the unique element after the pipe (e.g. "... on Pinterest | {pins}").

Maybe Google ignores everything after the pipe, because that's where people always put branding:
{title} | {meaningless company name}.
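As a rough illustration of that hypothesis, here is how the two orderings would differ. The helper names, board name, and pin count below are purely hypothetical, not Pinterest's actual title logic:

    def title_unique_after_pipe(board: str, pin_count: int) -> str:
        # Unique element ("{pins}") sits after the pipe, where branding usually goes
        return f"{board} on Pinterest | {pin_count} pins"

    def title_unique_before_pipe(board: str, pin_count: int) -> str:
        # Unique element sits before the pipe, where it is less likely to be treated as branding
        return f"{board} ({pin_count} pins) | Pinterest"

    print(title_unique_after_pipe("Edinburgh, Scotland", 412))
    print(title_unique_before_pipe("Edinburgh, Scotland", 412))

If the after-the-pipe portion really is discounted, only the second variant would actually make the titles look unique to Google.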
This doesn't demystify SEO. There are just so many factors involved in SEO, many of them unknown. Something that works today may not work tomorrow. The only true guideline to go by is to create great content for humans, period.
Good to see A/B testing applied to SEO instead of just UX changes. The tl;dr version:

A/B testing is:

    bucket(hash(experiment, user identifier))

A/B testing for SEO is:

    bucket(hash(experiment, url))
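A minimal sketch of that idea in Python; the MD5 hash and two-bucket split are illustrative assumptions, not Pinterest's actual implementation:

    import hashlib

    def bucket(experiment: str, key: str, num_buckets: int = 2) -> int:
        """Deterministically assign a key (user id or URL) to a bucket."""
        digest = hashlib.md5(f"{experiment}:{key}".encode("utf-8")).hexdigest()
        return int(digest, 16) % num_buckets

    # Classic A/B test: bucket by user identifier
    print(bucket("unique-titles", "user-12345"))

    # SEO A/B test: bucket by URL, so crawlers and visitors always see the same variant of a page
    print(bucket("unique-titles", "https://www.pinterest.com/explore/edinburgh/"))

Keying on the URL instead of the user means each page stays in a stable variant, which is what makes comparing traffic between the page groups meaningful.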
I don't think I have ever come across Pinterest by searching. Am I just searching for the wrong things? I thought Pinterest was largely a glorified bookmarking service - what original content is there that the search engines could pick up?
Are they comparing two URLs on the same domain? Is that really worthwhile? How much is it about the domain and how much about the single URL on the domain?

If I link to a specific URL, do I give PR to that URL or to the domain itself?
Is there a chance that a site as large as Pinterest might have its search rankings dominated by some hand-picked value rather than the many other factors that might affect a typical site?
This is pretty meaningless in context: when you visit Pinterest, the site is login-gated. Sure, that might look good on paper, but it's a short-term strategy.
Pinterest likely cloaks traffic. Internal site traffic to a pinboard requires a log-in/registration to continue viewing the board, while traffic from the Google index is allowed to keep viewing it. That is treating search engines differently than human users (unless they throw up this log-in wall for crawling Googlebots too, which would severely hamper the crawlability of the site).

You do not change the page titles of a site so you can get a few more visitors from Google's algorithm; you change the page titles of a site because they are ambiguous for all your users. If you want to create more unique page titles, you can credit the username that created the board in the page title, instead of a meaningless and ever-changing "number of pins on this board". For example: "Mickey Mouse on Pinterest by John Doe" or "Mickey Mouse | John Doe | Pinterest".

You run A/B tests to see if user engagement with the site increases. If you run A/B tests to see whether certain changes increase your search-engine rankings or Google visitors, then you are reverse engineering Google. Especially with a large site like Pinterest, this may gain you some ill-gotten benefit over sites that do play nice:

"If we discover a site running an experiment for an unnecessarily long time, we may interpret this as an attempt to deceive search engines and take action accordingly." [1]

Even on a site like Pinterest I see low-hanging on-page SEO stuff that could be implemented better. For instance, the header for a pinboard starts at line 788 of the HTML source. Proper content stacking/HTML code ordering ensures that information-retrieval bots do not have to wade through many menus of boilerplate text before they get to the unique meat of the page.

There is basically one single way to do legit SEO, and most of the tips and techniques for it are transparently written in the Google Webmaster Guidelines [3]. The good news is that this has not changed much at all over the years, so one can stop algo chasing and start improving the site for all users and all search engines.

BTW: the blog has no canonical tag [2] and puts the _entire_ article inside the contents of '<meta name="twitter:description"'.

[1] http://googlewebmastercentral.blogspot.nl/2012/08/website-testing-google-search.html

[2] http://googlewebmastercentral.blogspot.nl/2009/02/specify-your-canonical.html

[3] https://support.google.com/webmasters/answer/35769?hl=en "Following these guidelines will help Google find, index, and rank your site."
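A quick way to spot-check the missing-canonical point; a minimal sketch assuming the third-party requests and beautifulsoup4 libraries are installed, with the URL as an example only:

    from typing import Optional

    import requests
    from bs4 import BeautifulSoup

    def canonical_url(page_url: str) -> Optional[str]:
        """Return the canonical URL a page declares, or None if it declares none."""
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for link in soup.find_all("link"):
            # bs4 treats rel as a multi-valued attribute, so it comes back as a list
            if "canonical" in (link.get("rel") or []):
                return link.get("href")
        return None

    print(canonical_url("http://engineering.pinterest.com/"))  # example URL only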
First: I will not comment on the actual findings teased in this blog post, because we're missing a lot of information, data, and context (JavaScript to make rendering faster: was it really the first pageview that was faster, or was this aimed at the second? Client-side rendering actually makes the first pageview slower - please prove me wrong).

Second: this is the way SEO should be done - a systematic, analytics- and dev-driven approach - and they solved one of the challenges big sites regularly face SEO-wise: running multiple on-page tests (SEO is just one aspect) simultaneously over chunks of their sites.

Most of the time you are stuck with setting a custom variable (or virtual tracker) in Google Analytics for the pages you changed (and a control group).

The issue with this approach is that GA only reports a sample of data (50,000 rows a day), and for big sites this sample becomes insignificant very fast, especially if you run tests.

Additionally, it's not easy to compare the traffic figures of the tracked page group with log data like crawling, so you need a custom-built solution to connect these dots.

This leads us to a serious limitation of the GA and Pinterest approach: connecting their data with Google SERP impressions, average rankings, and clicks. Yeah, traffic is the goal of SEO, but it is pretty late in the funnel; crawling is pretty early in the funnel, and you can optimize everything in between. For the in-between we are stuck with Google Webmaster Tools for reliable data (at least it's data directly from Google and not some third party). So to get the most out of such tests you must set them up in a way that is traceable via Google Webmaster Tools.

And making something traceable in Google Webmaster Tools basically means you have to slice and dice it via namespaces in the URL.

Simple setup:

    www.example.com/   -> verify in Google Webmaster Tools
    www.example.com/a/ -> verify in Google Webmaster Tools to get data only for this segment
    www.example.com/b/ -> verify in Google Webmaster Tools, ...
    ...

Make tests on /a/ -> if it performs better than the rest of the site, good.

The issue there is that to have a control group you basically need to move a comparable chunk of the site to a new namespace, e.g. /z/. Site redirects are their own hassle, but on big sites they are usually worth it. Also, you don't have to move millions of pages; most of the time a sample on the scale of 50,000 pages is enough. (P.S.: every test segment should of course have its own sitemap.xml to get communicated/indexed data - see the sketch below for one way to generate those per-segment sitemaps.)

One more thing: doing positive-result tests is actually quite hard - doing negative-result tests is much easier. Make a test group of pages slow and see how your traffic plummets. Make your titles duplicate and see your traffic plummet, ... Yeah, these tests suck business-wise, but from an SEO and development point of view they are a lot of fun.

Shameless plug: hey Pinterest, check out my contacts on my profile. The goal of my company is to make all SEO agencies - including my own - redundant. We should do stuff.
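A minimal sketch of that per-segment setup in Python, reusing the example.com namespaces above; the segment list, hash-based assignment, and sitemap file names are illustrative assumptions, not a prescribed tool:

    import hashlib
    from xml.sax.saxutils import escape

    SEGMENTS = ["a", "b", "z"]  # /z/ serving as the control group

    def segment_for(page_id: str) -> str:
        """Deterministically map a page to one of the URL namespaces."""
        digest = int(hashlib.md5(page_id.encode("utf-8")).hexdigest(), 16)
        return SEGMENTS[digest % len(SEGMENTS)]

    def segmented_url(page_id: str, slug: str) -> str:
        return f"http://www.example.com/{segment_for(page_id)}/{slug}"

    def write_sitemap(segment: str, urls: list) -> None:
        """Write one sitemap per segment, so each can be verified separately in Webmaster Tools."""
        entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
        xml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
               f"{entries}\n"
               "</urlset>\n")
        with open(f"sitemap-{segment}.xml", "w", encoding="utf-8") as f:
            f.write(xml)

    # Example: assign a few pages to segments and emit the control-group sitemap
    pages = [("board-123", "edinburgh-scotland"), ("board-456", "mickey-mouse")]
    urls = {s: [] for s in SEGMENTS}
    for page_id, slug in pages:
        urls[segment_for(page_id)].append(segmented_url(page_id, slug))
    write_sitemap("z", urls["z"])

With one sitemap per namespace, crawl and indexation data can then be compared segment by segment in Webmaster Tools.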
This reads like a software engineer's poor attempt at dabbling in SEO. 'Growth' team indeed.

Hire an SEO - or at least a digital marketer with SEO credentials - and do some proper optimisation.