So yesterday we figured out that Facebook's Facebot crawler will crawl _every_ URL that was recorded by their tracking pixel.<p>I find this highly concerning, since:<p>1. they are crawling potentially sensitive information granted by links with tokens<p>2. they are triggering potentially harmful and/or confusing actions on your website by repeating links<p>3. they are repeating requests in a broken way by not encoding URL parameters correctly; for instance, a URL-encoded %2B ends up as a bare "+" and is thus decoded as whitespace (same goes for slashes etc.)<p>4. I could not find a warning or note in their tracking-pixel documentation that tracked pages would be crawled later
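For point 3, the double-decoding failure mode is easy to reproduce in a few lines (Python here purely for illustration; the parameter name is made up):

```python
from urllib.parse import unquote_plus

original = "token=abc%2Bdef"      # "+" correctly percent-encoded as %2B
first = unquote_plus(original)    # what a crawler holds after decoding once
print(first)                      # token=abc+def

# If the crawler replays the decoded form without re-encoding it,
# the server's own decode step turns the "+" into a space:
replayed = unquote_plus(first)
print(replayed)                   # token=abc def  (token no longer matches)
```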
> 1. they are crawling potentially sensitive information granted by links with tokens<p>Don't put Facebook tracking on sensitive pages. Actually, as a service to your users, don't put it anywhere it doesn't add value.<p>> 2. they are triggering potentially harmful and/or confusing actions in your website by repeating links<p>They only perform idempotent[0]* requests, which should not have any negative effect if performed multiple times.<p>0: <a href="http://restcookbook.com/HTTP%20Methods/idempotency/" rel="nofollow">http://restcookbook.com/HTTP%20Methods/idempotency/</a><p>* They probably only perform GET in reality
If the security of sensitive information depends on tokens in the URL, <i>don't just hand those URLs to a third party</i>. How would that ever be a reasonable thing to do? (Especially since the third party apparently hasn't given you any guarantees about how they treat them; otherwise we wouldn't be having this conversation.)<p>Do your users, your broken software, and yourself a favor and don't put Facebook tracking crap everywhere.
I don't mind Facebook crawling pages as long as it respects robots.txt, but for the last few weeks we've been <i>hammered</i> by requests from Facebook-owned IP addresses (millions of hits daily, 50+ for the same URL at times). They don't even set the User-Agent header.<p>There's a bug report regarding the missing header here: <a href="https://developers.facebook.com/bugs/1654459311255613/" rel="nofollow">https://developers.facebook.com/bugs/1654459311255613/</a><p>Unfortunately it seems impossible to get in touch with Facebook devs directly.
I assume the crawler only does HEAD/GET requests.
It's your fault if your webpage changes anything based on a GET.<p>Now, if the crawler doesn't honor robots.txt, then you can complain (loudly).
> they are triggering potentially harmful and/or confusing actions in your website by repeating links<p>Not their fault. GET requests should not modify anything.
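A minimal sketch of the distinction (hypothetical in-memory store, Python purely for illustration):

```python
# Hypothetical in-memory store, just to illustrate safe vs. unsafe handlers.
articles = {"1": {"title": "Hello", "deleted": False}}

def handle_get(article_id):
    """Safe: read-only. A crawler can repeat this endlessly without harm."""
    return articles.get(article_id)

def handle_delete(article_id):
    """Mutates state: must sit behind DELETE/POST, never a plain link,
    or every crawler that follows the link deletes the article."""
    articles[article_id]["deleted"] = True
```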
This is a great example of outrage from someone who doesn't understand how the web works. Unfortunately this is a problem with lots of web developers, but the author shouldn't take it personally and should try to learn from it. I can understand if they don't, though, because some of the replies here are a little harsh.<p>A summary of what most people are saying, including some takeaways:<p>- If you put something on the Internet it is public. Period. It is up to you to keep prying eyes away from that page. You can do that with strong mechanisms (like passwords and firewalls) or weak ones (like robots.txt), but you need to do something. You can't expect a page on the Internet to be private.<p>- Requests should never, ever have anything sensitive in the query string. The query string is inherently logged: by your browser history, your web server, and any tracking pixels (like Facebook's) you put on the page. If you absolutely must include a token in the URL (as with OAuth), make sure it is a temporary token that is immediately exchanged for something more durable like a cookie or local storage, that no unnecessary HTML is rendered, and that the user is redirected to a new page that doesn't have it in the URL.<p>- GET requests should be safe and idempotent. They should avoid changing any data and should not have side effects. This is specified directly in the HTTP spec.<p>- If your page displays sensitive data, it should require the security tokens in a header field (like cookies or authentication headers). Users who hit the page without that header field should be responded to with a 404.<p>- Your point #3 is an odd one. It is a bug on Facebook's side, yes, but it doesn't support your primary argument. In fact, if they fixed that bug it would make the perceived issues in your primary argument worse.<p>- Re #4: they don't need to warn you. See the first bullet. If it is on the Internet it is public. Skype, Slack, Twitter, and Google all do the same thing.
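The "respond with a 404" point can be sketched like this (hypothetical session check, Python for illustration; a real app would validate the session value against a server-side store rather than just checking presence):

```python
def status_for(headers):
    """Return 404, not 401/403, when credentials are absent, so the page's
    very existence is never confirmed to an unauthenticated crawler."""
    cookie = headers.get("Cookie", "")
    return 200 if "session=" in cookie else 404
```

A request carrying only a URL token gets `status_for({})` → 404; one with a real session cookie gets `status_for({"Cookie": "session=abc123"})` → 200.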
Isn't it obvious? For what reason, if not tracking and information gathering, would such a feature even exist?<p>The best solution is still to block Facebook's infrastructure, as always.
<rant>
Shocking!<p>Abuse of power and shady tracking techniques by Facebook? Unheard of!
</rant><p>Seriously, this cannot be surprising after learning that the Messenger app listens to everything you do, all the time. That's just off the top of my head. They are doing this and much more.
A while ago, while looking at the Apache logs, I noticed that the AdWords remarketing pixel does the same: it was trying to crawl private URLs that are only accessible to 'admins' and are not linked publicly. I'm not sure if this is still the case, as I blocked it using robots.txt.<p>Also, the same crawler ignores the "User-agent: *" directive in the robots.txt file and you have to add specific rules for it: "User-agent: AdsBot-Google"
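Google documents that AdsBot ignores the wildcard group unless it is named explicitly, so you end up duplicating the rules (the paths here are illustrative):

```
User-agent: AdsBot-Google
Disallow: /admin/

User-agent: *
Disallow: /admin/
```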
> So yesterday we figured out that facebooks Facebot crawler will crawl _every_ url that was recorded by their tracking pixel.<p>Not surprising at all. Would be interesting to see a write up on this.
> we figured out that facebooks Facebot crawler will crawl _every_ url that was recorded by their tracking pixel.<p>I would be more surprised to find out that they didn't crawl everything they can, specifically pages that invite them in.<p>> 1. they are crawling potentially sensitive information granted by links with tokens<p>If the page contains sensitive information you absolutely should not have code on it that you do not control (<i>any</i> code loaded from third-party hosts, not just Facebook's bits).<p>As a matter of security due diligence, if you have third-party hosted code linked into any such pages you should remove it with some urgency and carefully review the design decisions that led to the situation. If you really must have the third-party code in that area, then you'll need to find a way of removing the need for the tokens being present.<p>Furthermore, if the information is sensitive to a particular user, then your session management should not permit a request from Facebook (or any other entity that has not correctly followed your authentication procedure) to see the content anyway.<p>> 2. they are triggering potentially harmful and/or confusing actions in your website by repeating links<p>Possibly true, but again that suggests a design flaw in the page in question. I assume that they are not sending POST or PUT requests? GET and HEAD requests should at the very least be idempotent (so repeated calls are not a problem) and ideally lack any lasting side effect (with the exception of logging).<p>> 3. they are repeating requests in a broken way by not encoding url-parameters correctly<p>That does sound like a flaw, but one that your code should be immune to being broken by. Inputs should always be verified, and action not taken unless they are valid. This is standard practice for good security and stability.
The Internet is a public place, and the public includes both deliberately nasty people and damagingly stupid ones, so your code needs to take proper measures to not allow malformed inputs to cause problems.<p>You can't use "the page isn't normally linked from other sources so won't normally be found by a crawler" as a valid mitigation, because the page could potentially be found by a malicious entity via URL fuzzing.<p>> 4. I could not find a warning or note on their tracking-pixel documentation that pages tracked would be crawled later<p>A warning would be nice, but again, unless they explicitly say they won't do such things, I would be surprised to find that they don't, not that they do.
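Input validation along those lines can be as simple as a whitelist pattern (the token format here is hypothetical; adjust the pattern to whatever your tokens actually use):

```python
import re

# Hypothetical token alphabet and length; purely illustrative.
TOKEN_RE = re.compile(r"[A-Za-z0-9_-]{16,64}")

def parse_token(raw):
    """Reject malformed input instead of acting on it. A "+" that was
    mangled into a space on replay fails the pattern and is dropped."""
    return raw if TOKEN_RE.fullmatch(raw) else None
```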
Does it crawl URLs blocked by robots.txt? I doubt it. If you don't want a well-behaved crawler to crawl your site, there's your answer. But not all crawlers are well behaved...
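Checking whether a given URL is blocked for Facebook's crawler (UA token "Facebot") can be done with the stdlib, assuming a rule set like the one below; example.com and the paths are placeholders:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse rules inline; a real check would fetch https://example.com/robots.txt
rp.parse([
    "User-agent: Facebot",
    "Disallow: /private/",
])

print(rp.can_fetch("Facebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Facebot", "https://example.com/public"))        # True
```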
It is the fucking internet: if you put something on there you should expect someone to find it, be it a crawler or an attacker.<p>> 1. they are crawling potentially sensitive information granted by links with tokens<p>If tokens in GET params are your security concept: please leave the entire field.<p>> 2. they are triggering potentially harmful and/or confusing actions in your website by repeating links<p>So you built something that can be triggered by a simple HTTP request and may have harmful potential? Wow.<p>> 3. they are repeating requests in a broken way by not encoding url-parameters correctly<p>You are kidding, right? That's a problem to you? Either your webserver drops these or your routes don't match, end of story.<p>> 4. I could not find a warning or note on their tracking-pixel documentation that pages tracked would be crawled later<p>Not a problem; you put it on the web and it will be crawled. Did you ever use Chrome? It reports every URL you type to the Google crawler. Read that anywhere lately?