That's a great garden path sentence (or at least beautifully ambiguous).<p>I initially parsed it as "[The] James Webb Telescope ([which is a tele]scope) [team] claim [they can] compress [pictures somehow] using a bitmap".
As a casual reader not familiar with the problem of large JWT scopes, I suggest the strength of the argument for this proposal could be improved by explaining exactly why larger JWT tokens cause problems in practice, defining some metrics or benchmarks that can be used to measure the impact of the problem, then comparing the proposed solution to the baseline approach using those metrics.<p>E.g. are large JWT tokens bad because they take longer to send? Then compare durations between both implementations. Or are large JWT tokens bad because they exceed some maximum supported token size? Then compare how many scopes are supported by the standard approach and the new approach before hitting the limit.<p>Another thought: the proposed compression scheme requires that each token producer & consumer depend upon a centralised scope list that defines the single source of truth of which scope is associated with which index, for all scopes in the ecosystem.<p>If we assume we've set that up, why not generalise and say the centralised shared scope list is actually a centralised JWT compression standard that defines how to compress and decompress the entire JWT.<p>This could be implemented as something like <a href="https://facebook.github.io/zstd/" rel="nofollow">https://facebook.github.io/zstd/</a> running in dictionary compression mode, where a custom compression dictionary is created that is good at compressing the expected distribution of possible JWT values. As long as each JWT producer and consumer is using the same version of the centrally managed compression dictionary, the compression & decompression could still occur locally.<p>In practice, in any given JWT ecosystem there's perhaps a subset of scopes that commonly appear together. If common enough, that entire subset of common scopes could be encoded in one bit (or even zero) using a custom compression scheme tuned to the frequency of observed JWT scopes, instead of compressing each scope independently.
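<p>A minimal sketch of the dictionary idea, assuming the python zstandard bindings; the scope names are illustrative, and the "dictionary" here is just the raw scope list itself rather than a trained one:<pre><code>import zstandard as zstd

# The centrally managed scope list doubles as a shared raw-content dictionary.
# Every producer and consumer must pin the exact same bytes (version it).
SCOPE_LIST = b" ".join([
    b"repo:read", b"repo:write", b"repo:admin",
    b"org:read", b"org:admin", b"user:email", b"user:profile",
])
shared_dict = zstd.ZstdCompressionDict(SCOPE_LIST, dict_type=zstd.DICT_TYPE_RAWCONTENT)

compressor = zstd.ZstdCompressor(dict_data=shared_dict)
decompressor = zstd.ZstdDecompressor(dict_data=shared_dict)

scopes = b"repo:read repo:write user:email"
blob = compressor.compress(scopes)
assert decompressor.decompress(blob) == scopes
# Note: zstd frame overhead can dominate for very small payloads,
# so measure against realistic scope lists before committing.
print(len(scopes), "->", len(blob))
</code></pre>A trained dictionary (zstd.train_dictionary over a corpus of real tokens) would additionally capture the co-occurrence statistics mentioned above, at the cost of a training step in the release pipeline.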
> Your resources (aka content) ACL should not be in the scope itself<p>I have to disagree on this one: being able to specify which resources an OAuth client can tinker with is useful (e.g. only read access to the x, y, and z repos).<p>I'm also curious how often these use cases genuinely need many scopes / a god JWT, vs. typical production usage where you keep a narrow scope for the task at hand. There's also the other option (if you're in charge of authoring the resource server) of having broader scopes that encompass several others.
The access token is designed to save size; scopes go in the ID token. The disadvantage of the access token is the required back channel. But this scheme also needs a shared back channel, so it's pretty much an access token implementation (you could express it as one).<p><a href="https://developer.okta.com/docs/guides/validate-id-tokens/overview/" rel="nofollow">https://developer.okta.com/docs/guides/validate-id-tokens/ov...</a>
How about side-stepping the problem of big lists of scopes by training a zlib or zstd dictionary on the list of scopes, and then compressing the scopes in the JWT using this dictionary?<p>The obvious benefit is that you can still represent a scope that is not included in the ones used to train the dictionary (vs. the proposed approach, which breaks down in this case).
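<p>For the zlib variant, a sketch using the stdlib's preset-dictionary support (scope names hypothetical); note how a scope absent from the dictionary still round-trips, which is the fallback property claimed above:<pre><code>import zlib

# Shared preset dictionary built from the known scope list.
# Producers and consumers must pin the exact same bytes.
ZDICT = b"repo:read repo:write repo:admin org:read org:admin user:email"

def compress_scopes(scopes: str) -> bytes:
    co = zlib.compressobj(level=9, zdict=ZDICT)
    return co.compress(scopes.encode()) + co.flush()

def decompress_scopes(blob: bytes) -> str:
    de = zlib.decompressobj(zdict=ZDICT)
    return (de.decompress(blob) + de.flush()).decode()

# "totally:new-scope" is not in ZDICT, yet it still round-trips;
# it just compresses less well than the known scopes.
claim = "repo:read org:admin totally:new-scope"
assert decompress_scopes(compress_scopes(claim)) == claim
</code></pre>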
I commend you for the attempt...<p>But the issue you are trying to mitigate (heavy tokens due to a complex scope strategy) is a symptom of a bigger problem that has caused OAuth-using folks to scratch their heads for a long while. (Of course, it also relates to non-OAuth JWTs.)<p>TL;DR: the new "cloud native" way of solving this is to not push your "permissions" through the token.<p>Basically, you limit the scopes included in a token to just a few basic ones (essentially assigning the user to a "role" - think RBAC)...<p>... and then you use a modern authorization approach (e.g. CNCF Open Policy Agent) to implement the detailed/fine-grained authorization.<p>It's hella cool, declarative, distributed, and infinitely scalable...<p>... and it obviates the whole "heavy JWT" issue before it starts.<p>Source: this is what I do day in, day out in my day job.
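<p>For the curious, a rough sketch of what the runtime check looks like against OPA's REST data API (the sidecar address and policy path are illustrative, not standard):<pre><code>import requests  # assumes an OPA sidecar listening on localhost:8181

# The token stays thin: identity plus a coarse role claim (RBAC).
token_claims = {"sub": "alice", "role": "maintainer"}

# The fine-grained decision lives in a Rego policy loaded into OPA;
# "authz/allow" is a hypothetical policy path.
decision = requests.post(
    "http://localhost:8181/v1/data/authz/allow",
    json={"input": {
        "claims": token_claims,
        "action": "push",
        "resource": "repos/acme/widget",
    }},
).json()

if decision.get("result") is True:
    pass  # authorized; handle the request
</code></pre>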