Very happy to see this landing in Firefox!

For the people wondering what the motivation is, https://www.w3.org/TR/fetch-metadata/#intro has a good summary:

> Interesting web applications generally end up with a large number of web-exposed endpoints that might reveal sensitive data about a user, or take action on a user’s behalf. Since users' browsers can be easily convinced to make requests to those endpoints, and to include the users' ambient credentials (cookies, privileged position on an intranet, etc), applications need to be very careful about the way those endpoints work in order to avoid abuse.
>
> Being careful turns out to be hard in some cases ("simple" CSRF), and practically impossible in others (cross-site search, timing attacks, etc). The latter category includes timing attacks based on the server-side processing necessary to generate certain responses, and length measurements (both via web-facing timing attacks and passive network attackers).
>
> It would be helpful if servers could make more intelligent decisions about whether or not to respond to a given request based on the way that it’s made in order to mitigate the latter category. For example, it seems pretty unlikely that a "Transfer all my money" endpoint on a bank’s server would expect to be referenced from an img tag, and likewise unlikely that evil.com is going to be making any legitimate requests whatsoever. Ideally, the server could reject these requests a priori rather than delivering them to the application backend.
>
> Here, we describe a mechanism by which user agents can enable this kind of decision-making by adding additional context to outgoing requests. By delivering metadata to a server in a set of fetch metadata headers, we enable applications to quickly reject requests based on testing a set of preconditions. That work can even be lifted up above the application layer (to reverse proxies, CDNs, etc) if desired.
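
To make the "reject requests based on testing a set of preconditions" part concrete, here is a rough sketch (in TypeScript, framework-agnostic) of the kind of resource-isolation check a server could run against the Sec-Fetch-Site / Sec-Fetch-Mode / Sec-Fetch-Dest headers. The function name and the exact policy are my own illustration, not something defined by the spec:

```typescript
// Values of the Fetch Metadata headers relevant to the check below.
interface FetchMetadata {
  site?: string; // Sec-Fetch-Site: same-origin | same-site | none | cross-site
  mode?: string; // Sec-Fetch-Mode: navigate | cors | no-cors | same-origin | websocket
  dest?: string; // Sec-Fetch-Dest: document | image | script | object | embed | ...
}

// Returns true if the request should be passed through to the application.
function allowRequest(meta: FetchMetadata, method: string): boolean {
  // Browsers that don't send Fetch Metadata: fail open so they keep working.
  if (meta.site === undefined) return true;

  // Requests from our own origin/site, or user-initiated ones
  // (address bar, bookmarks), are allowed.
  if (meta.site === "same-origin" || meta.site === "same-site" || meta.site === "none") {
    return true;
  }

  // Allow plain cross-site top-level navigations (following a link),
  // but not ones that embed us as a subresource or could change state.
  if (meta.mode === "navigate" && method === "GET" &&
      meta.dest !== "object" && meta.dest !== "embed") {
    return true;
  }

  // Everything else (cross-site <img>, fetch(), <script>, etc.) is
  // rejected before it ever reaches the application backend.
  return false;
}

// Example: a cross-site <img> pointing at a sensitive endpoint is dropped.
console.log(allowRequest({ site: "cross-site", mode: "no-cors", dest: "image" }, "GET")); // false
```

The nice property the spec points out is that a check like this is cheap and purely header-based, so it can run in a reverse proxy or CDN rather than in the application itself.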