1. To my mind, the fundamental problem OAuth solves is "letting a user decide" to share data with an app, without making the user responsible for jumping back and forth between the app and her API provider (her bank, in this case). OAuth holds the user's hand through a series of redirects, and the user doesn't have to copy/paste tokens, or remember where she is in the flow, or know what comes next. Does TAuth have a similar capability? The blog post mentions "User Tokens" in passing, but doesn't define or describe them.<p>2. OAuth 2.0 is published as an RFC from IETF. It may be a bear to read (and yes, it's a framework rather than a protocol!), but the spec is open, easy to find, and carefully edited (<a href="https://tools.ietf.org/html/rfc6749" rel="nofollow">https://tools.ietf.org/html/rfc6749</a>). Is TAuth meant as a specification, or a one-off API design? If it's a specification, has there been an attempt to write it down as such?
One big problem with OAuth on mobile apps is this scenario. I've seen it in the wild for non-security-critical apps. As far as I can tell, it's not a bug so much as a problem with the OAuth protocol and WebView permissions:<p>1) MyLittleApp wants OAuth access to BankOfMars<p>2) MyLittleApp bundles the BankOfMars SDK into MyLittleApp<p>3) MyLittleApp requests OAuth access via the SDK<p>4) The SDK opens a WebView for the user to log into BankOfMars<p>5) MyLittleApp has full control over the DOM presented to the user, since the WebView is technically its own.<p>6) MyLittleApp extracts the user's password from the DOM of the WebView<p>7) MyLittleApp disappears and... profit?
In my opinion, having worked extensively with OAuth2 (mostly in the form of OIDC) and other modern AuthN/Z protocols, the author of this post does not truly understand OAuth 2, nor have they looked in any appropriate depth into supplements like OIDC or alternatives.<p>For one, bearer token [1] is only one type of "Access Token" described by the OAuth2 spec [2]. In fact, the OAuth2 spec is very vague on quite a few implementation details (such as how to obtain user info, how to validate an Access Token), which the author seems to just assume are part of the spec, as he does with bearer token. Other parts, like the client/user distinction, and the recommendation for separate validation of clients, the author ignores completely, generating his own (ironically mostly OAuth2-compliant [3]) spec.<p>> Shared secrets mean no non-repudiation.<p>Again, not true. Diffie-Hellman provides a great way to come to a shared secret that you can be cryptographically sure (the adversary's advantage is negligible) is shared between you and a single verifiable keyholder.<p>> Most importantly using JWT tokens make it basically impossible for you to experiment with an API using cURL.<p><i>sigh</i>. If only there was a way to write one orthogonal program that can speak HTTP, and in a single cli command send that program's output to another program that can understand the output. Maybe we could call it a pipe. And use this symbol: |. If only.<p>> OAuth 2.0 is simply a security car crash from a bank's perspective. They have no way to prove that an API transaction is bona fide, exposing them to unlimited liability.<p>TL;DR: This article, led by comments like this ("unlimited", really?), strikes me as pure marketing (aimed at a naive audience) for a "spec" that probably would not exist had proper due diligence into alternatives, or perhaps some public discussion, occurred. 
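The Diffie-Hellman rebuttal above is easy to demonstrate mechanically. A textbook sketch with deliberately toy parameters (p=23, g=5 — real deployments use large vetted groups or ECDH, and additionally authenticate the exchanged public values to tie the secret to a verifiable keyholder):

```python
import secrets

# Textbook Diffie-Hellman with deliberately toy parameters (p=23, g=5),
# purely to show the mechanics. Real deployments use large vetted groups
# (or ECDH) and authenticate the exchanged public values.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent, in [1, p-2]
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # Alice's public value, sent in the clear
B = pow(g, b, p)                   # Bob's public value, sent in the clear

shared_alice = pow(B, a, p)        # Alice combines Bob's public value
shared_bob = pow(A, b, p)          # Bob combines Alice's public value
assert shared_alice == shared_bob  # identical secret, never transmitted
```

Both sides derive g^(ab) mod p without the secret ever crossing the wire; an eavesdropper sees only g, p, A, and B.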
At the very least, inconsistencies (a few of which I've mentioned above) could have been avoided.<p>[1] <a href="https://tools.ietf.org/html/rfc6750" rel="nofollow">https://tools.ietf.org/html/rfc6750</a> [2] <a href="https://tools.ietf.org/html/rfc6749" rel="nofollow">https://tools.ietf.org/html/rfc6749</a> [3] <a href="https://tools.ietf.org/html/rfc6749#section-2.3.2" rel="nofollow">https://tools.ietf.org/html/rfc6749#section-2.3.2</a>
Some features that I think a system like this should have:<p>1. The client (or the device holding the authentication token, or the app, etc) should be able to maintain (on its own storage!) an audit log of all transactions it has authorized, that log should be cryptographically verifiable to be append-only (think blockchain but without all the Bitcoin connotations), and the server should store audit log hashes <i>and verify that they were only ever appended to</i>. And the server should send a non-repudiable confirmation of this back to the client.<p>Why? If someone compromises the bank or the bank-issued credentials (it seems quite likely that, in at least one implementation, the bank will know the client private keys), the client should be able to give strong evidence that they did <i>not</i> initiate a given transaction by showing (a) their audit log that does not contain that transaction and (b) the server's signature on that audit log.<p>2. Direct support for non-repudiable signatures on the transactions themselves. Unless I'm misunderstanding what the client certs are doing in this protocol, TAuth seems to give non-repudiation on the session setup but not on the transactions themselves. Did I read it wrong?<p>3. An obvious place where an HSM fits in.<p>How does TAuth stack up here?<p>Also, there's a very strange statement on the website:<p>> to unimpeachably attribute a request to a given developer. In cryptography this is known as non-repudiation.<p>Is that actually correct as written or did you mean "to a given user"?
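Point 1 above can be sketched as a hash chain: each entry commits to the hash of everything before it, so the server only needs to countersign the latest head to vouch for the whole history. A minimal illustration (field names and structure are hypothetical, not anything TAuth specifies; the server's countersignature is not shown):

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous hash,
    so any retroactive edit or deletion changes every later hash."""

    def __init__(self):
        self.entries = []        # list of (record, chained_hash) pairs
        self.head = "0" * 64     # genesis hash

    def append(self, record: dict) -> str:
        payload = self.head + json.dumps(record, sort_keys=True)
        self.head = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((record, self.head))
        return self.head         # the hash the server would countersign

    def verify(self) -> bool:
        h = "0" * 64
        for record, stored in self.entries:
            h = hashlib.sha256((h + json.dumps(record, sort_keys=True)).encode()).hexdigest()
            if h != stored:
                return False
        return True

log = AuditLog()
log.append({"tx": "pay", "amount": 125, "to": "ACME"})
log.append({"tx": "pay", "amount": 40, "to": "Bob"})
assert log.verify()

# Tampering with an earlier record breaks the chain:
log.entries[0] = ({"tx": "pay", "amount": 999999, "to": "Mallory"},
                  log.entries[0][1])
assert not log.verify()
```

With the server periodically signing the head hash, a client can later prove that a disputed transaction was never in the log it reported.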
As much as I love Stevie, teller.io and this demo: Why not both?<p>OAuth 2 is not "bad" in general, you just need to consider the implications of using it. If you have an API that allows clients to move customers' money or take out loans, you should take additional steps to defend against MITM attacks. For example using client side certificates :)<p>That said, TAuth looks really good and tidy. Of course the developer may still lose the private key, so in the end you'll always need to additionally monitor API requests for suspicious behaviour.
The main complaint about OAuth 2.0 seems to be that bearer tokens are a bad idea. Well, you can implement OAuth 2.0 to use any kind of token you want, with any property you want. People do bearer tokens because it is easy, not because it is required.<p>The secondary complaint seems to be that OAuth 2.0 is a mess. That one I heartily agree with! A few years ago I wound up having to figure out OAuth 2.0 and wrote <a href="http://search.cpan.org/~tilly/LWP-Authen-OAuth2-0.07/lib/LWP/Authen/OAuth2/Overview.pod" rel="nofollow">http://search.cpan.org/~tilly/LWP-Authen-OAuth2-0.07/lib/LWP...</a> as the explanation that I wish I had to start. In the process I figured out why most of the complexity exists, and whose interests the specification serves.<p>The key point is this: <i>OAuth 2 makes it easy for large service providers to write many APIs that users can securely authorize third party consumers to use on their behalf. Everything good (and bad!) about the specification comes from this fact.</i><p>In other words, it serves the need of service providers like Google and Facebook. API consumers use it because we want to access those APIs. And not because it is a good protocol for us. (It most emphatically is a mess for us!)
> One of the biggest problems with OAuth 2.0 is that it delegates all security concerns to TLS but only the client authenticates the server (via it's SSL certificate), the server does not authenticate the client. This means the server has no way of knowing who is actually sending the request.<p>That's just plain not true. In the OAuth2 authorization_code grant, a "confidential" client is REQUIRED to send a client_id and client_secret to authenticate itself to the server.<p><a href="https://tools.ietf.org/html/rfc6749#section-4.1.3" rel="nofollow">https://tools.ietf.org/html/rfc6749#section-4.1.3</a><p>> If the client type is confidential or the client was issued client
credentials (or assigned other authentication requirements), the
client MUST authenticate with the authorization server as described
in Section 3.2.1.<p>Now, this doesn't work for "public" clients like a pure-javascript webapp, but that's a separate question.<p>Count me as pretty dubious of letting some unknown group try to re-implement bank authentication without fully understanding the specification they're trying to fix.
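The confidential-client requirement quoted above is concrete enough to sketch. A hedged example of building such a token request, authenticating with HTTP Basic as RFC 6749 section 2.3.1 permits (client_id, secret, code, and redirect URI are all made-up values):

```python
import base64
from urllib.parse import urlencode

def token_request(client_id: str, client_secret: str,
                  code: str, redirect_uri: str):
    """Build (not send) an authorization_code token request for a
    confidential client, with HTTP Basic client authentication."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    })
    return headers, body

headers, body = token_request("my-client", "s3cret",
                              "an-authorization-code",
                              "https://client.example.com/cb")
print(headers["Authorization"])
```

The authorization server decodes the Basic credentials and rejects the exchange unless both the client authenticates and the code was issued to that same client.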
Their description of the MITM attack is entirely dependent upon how the authorization server validates redirects in the implicit and authorization code grant flows. This is tied to how client registration is performed. So, if you want to ensure that the authorization code or access token is only delivered to a redirect URI that is trusted, that should be part of the policy enforced in your infrastructure... More specifically, you can require domain verification and validation as part of the client registration process, and I would expect that at a minimum when dealing with delegated access to financials.<p>Another alternative to this would be to perform an OOB flow, wherein the redirect URI is actually hosted on the authorization server itself and the client can scrape the access token from the Location header.
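The strictest form of that registration policy is exact-match validation of the requested redirect URI against the pre-registered list — a small sketch (URIs are hypothetical) shows why anything looser, like prefix or substring matching, reopens the door:

```python
def redirect_uri_allowed(requested: str, registered: list[str]) -> bool:
    """Exact string match against pre-registered redirect URIs.
    Prefix or pattern matching can be tricked by lookalike hosts
    and path traversal; exact matching cannot."""
    return requested in registered

registered = ["https://app.example.com/oauth/callback"]

assert redirect_uri_allowed("https://app.example.com/oauth/callback", registered)
# Lookalike domain: substring/suffix checks would be fooled here.
assert not redirect_uri_allowed("https://app.example.com.evil.net/oauth/callback", registered)
# Path traversal past the registered path.
assert not redirect_uri_allowed("https://app.example.com/oauth/callback/../steal", registered)
```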
This is unnecessary. Many banks can and will enforce 2-factor authentication with their OAuth flow, which sufficiently validates the user and would prevent a MITM attack.<p>Your whole premise rests on the threat that a client browser would not properly validate a server certificate... come on... really?
I visited the homepage (<a href="https://www.teller.io/" rel="nofollow">https://www.teller.io/</a>) and got a warning about the SSL cert being invalid. Kind of ironic. :)
This is unlikely to work - developers in general can't cope with managing SSL certificates. They won't know what to do with them or handle them securely.<p>You need full integrity verification, with a secure store and whitebox crypto keys to make such a scheme secure.
Problem one exists because, apparently, MITM is a problem with TLS, since it's possible for bogus certificates to get through? Well... I guess. But then that's a TLS problem. And your entire banking website is served through TLS. So, if it really is an issue, then solving it just for auth is like putting an Abus padlock on a screen door.<p>Problem two bemoans the bearer token in OAuth 2. Yes, it's not as secure as OAuth 1, but it's also far simpler. But you don't have to use bearer tokens; you are free to use MAC tokens instead. Why reinvent the wheel?
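For illustration, a MAC-style token looks roughly like this — loosely inspired by the (expired) OAuth 2.0 MAC token draft, with illustrative rather than spec-exact field names. The key never travels with the request; only a MAC over request details does, so a captured header can't be replayed against a different request:

```python
import hashlib
import hmac
import time

def sign_request(mac_key: bytes, key_id: str,
                 method: str, path: str, host: str) -> str:
    """Build a MAC-style Authorization header over the request details.
    Field names are illustrative, not the draft's exact wire format."""
    ts = str(int(time.time()))
    nonce = "request-specific-nonce"   # use a real random nonce in practice
    base = "\n".join([ts, nonce, method, path, host]) + "\n"
    mac = hmac.new(mac_key, base.encode(), hashlib.sha256).hexdigest()
    return f'MAC id="{key_id}", ts="{ts}", nonce="{nonce}", mac="{mac}"'

header = sign_request(b"shared-mac-key", "key-1",
                      "GET", "/accounts", "api.bank.example")
print(header)
```

The server, which holds the same key, recomputes the MAC over the request it actually received and rejects any mismatch or stale timestamp.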
I think my biggest gripe here is that, as far as I understand this flow, a certificate that is generated and signed by a third party (Teller in this case) is expected to be bundled, private key and all, with the application. Isn't it possible to extract the private key from the app bundle after the fact? Or am I missing something here...
The premise for adding client certificates is a MITM made possible because careless app developers will disable server certificate validation.<p>So, how exactly does adding a client certificate solve that problem? If server certificate validation is disabled on the client, the MITM can still accept the client certificate and substitute their own.<p>The difference is that in this case the attacker will gain access to the API but the client will not, unless they are being actively MITM'd. If the client tries to access the API outside the MITM, their client cert will be rejected as invalid.
Client certs are still a bit of a pain. There is already an IETF spec in the works, called Token Binding, on how to bind tokens to key pairs that clients maintain, and create on demand.<p><a href="https://github.com/TokenBinding/Internet-Drafts" rel="nofollow">https://github.com/TokenBinding/Internet-Drafts</a><p><a href="http://www.browserauth.net/home" rel="nofollow">http://www.browserauth.net/home</a><p>It's already implemented in Chrome.
Actually OAuth 1.0 is less secure than OAuth 2.0 because it engages in security theater. It doesn't even require HTTPS, and as a result any man in the middle can eavesdrop on the requests. And if the token is leaked, it's game over.
So, it seems like the main concern here is that a client will not validate the SSL certificate, so the SSL layer is now reimplemented by hand in JavaScript using the WebCrypto API to prevent this? I can see not validating SSL certificates being a potential problem with something like a REST API client, but is it common to disable SSL verification at the browser level, where you would need to use JavaScript to do this?
One of the things about OAuth is that the user needs to check the website URL where he is giving his credentials. Amusingly, many mobile apps seem to forget this important bit. They redirect me to a web UI <i>inside</i> the app itself and expect me to enter my password inside the app. I guess they thought this was a better user experience than handing over control to the browser :/
Two things:
1. Why not just add client-side certificates to an OAuth-based API?
2. Client certificates do not prevent an attacker from pretending to be the server.<p>Let's say your API server followed the standard OAuth 2.0 protocol, except it required client-side certificates. Would that be as secure as TAuth?<p>If so, then the OAuth 2.0 option has the advantage of being well-supported by existing libraries and well-understood from a security perspective. It's less likely that a previously-unknown issue with OAuth 2.0 will crop up and force everyone to scramble for a fix.<p>And while client certificates prevent an attacker from forging client requests (i.e. tricking the API server), an attacker can still trick the client. An attacker capable of MITM'ing server-cert-only HTTPS can also trick TAuth clients into sending their banking API requests to the attacker's servers. It can respond to those requests with whatever it wants.<p>To summon the activation energy to adopt (or switch to) a new, less-popular protocol, I'd expect more security benefits.
> <i>Most importantly using JWT tokens make it basically impossible for you to experiment with an API using cURL. A major impediment to developer experience.</i><p>Why can't a developer do exactly what you did in your second video, which is to save the JWT to a variable, and then use it in the request?<p>Heck, you could create a quick wrapper "jwt_curl"/"jwt_http" or something that automatically pulled in that variable…<p>There are two big things about this scheme that leave me confused: how do you know what the correct certificate for the client is? Do you just send it over HTTPS? But then, one of your opening premises is that we don't get TLS verification correct and are open to MitM, so this seems to contradict that, or are we hoping that "that one request won't be MitM'd", like in HSTS? (which seems fine)
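The suggested wrapper is a few lines in any language. A hedged sketch (the URL and token value are placeholders, and the helper name is made up) of a function that injects a saved JWT into every request:

```python
import urllib.request

# Hypothetical stand-in for the suggested "jwt_curl" wrapper: save the
# JWT once, then reuse it for every request instead of retyping it.
def jwt_get(url: str, token: str) -> urllib.request.Request:
    """Build (not send) a GET request carrying the saved JWT."""
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

req = jwt_get("https://api.example.com/accounts", "eyJ...saved-token")
print(req.get_header("Authorization"))
```

Pass the returned request to `urllib.request.urlopen` (or do the equivalent with curl and a shell variable) and the "impossible to experiment" objection evaporates.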
How does this compare with SimpleFIN: <a href="https://bridge.simplefin.org/info/developer" rel="nofollow">https://bridge.simplefin.org/info/developer</a><p>SimpleFIN seems simple and still secure. But maybe I'm missing something?
It's still a bit unclear to me how a client generates his certificates and somehow links them to his bank account. The demo shows a web UI generating one, but would a mobile user have to visit the website to fetch a certificate?
By logging into a 3rd-party site using Google+, for instance, you remain logged-in to Google when you go to <i>any</i> other web site.<p>And the authenticator clearly does not require this global behavior: if you immediately log out from a Google page, you remain “logged in” at the 3rd-party site that you started from. So why doesn’t it log you out globally? Probably to convenience Google, at the expense of security when you auto-identify yourself to who knows how many other web sites before you realize what happened.<p>Logging into one page with one set of permissions should mean “LOG INTO THIS PAGE”, not “REVEAL MY SECRETS TO THE INTERNET”.
Let me see if I understand this correctly:<p>1) Problem: app authors disable TLS (server) cert validation.<p>2) Solution: give each app author the responsibility of managing and distributing a client side certificate.<p>Sounds like now you have two problems? In particular, you now have to make sure that every lost/compromised certificate is added to your growing CRL? And you need app developers that demonstrably do not even have the vaguest idea how public key cryptography can be used for authentication to take responsibility for doing this? And there's still no guarantee that they won't disable certificate verification?<p>Did I miss anything?
At Qbix we developed a much more secure way than OAuth to <i>instantly</i> personalize a person's session -- and even connect the user to all their friends -- while maintaining privacy of the user and preventing tracking across domains by everyone except those they choose to tell "I am X on Y network". It also restores the social graph automatically on every network and tells you when your friends joined one of your networks.
>The EU is forcing all European banks to expose account APIs with PSD II by end of 2017.<p>Any reference for this? The text of PSD II is here — <a href="http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32015L2366" rel="nofollow">http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:320...</a> — but it's too long and it isn't clear to me whether it is actually ratified.
What bothers me about OAuth is the way you're on one website and are then asked with a pop-up to enter your Gmail or Facebook etc. password as a normal part of the flow. Users aren't savvy enough to check the URL or understand what's going on here, so getting them used to this flow is asking for phishing by the look of it. Something that forced two-factor authentication would be good.
It's pretty strange to see a new authentication protocol (they describe it as authorization protocol, but they do authentication as well), just as W3C's WebID-TLS is being finalised. Oh, did I mention it uses client X.509 certificates as well? And how does the author imagine that banks would rely on his new protocol to ensure non-repudiation?
> The most realistic threat is the client developer not properly verifying the server certificate, i.e. was it ultimately signed by a trusted certificate authority?<p>From an attackers point of view, this sounds like a very tiny ray of hope. It sounds like a cool feature/vulnerability that will probably be going away soon because it is so easy to fix.
I didn't see anything about renegotiation. If clients present their certificates during the first handshake, it will lead to privacy concerns: attackers could observe clients' certificates (extract metadata, de-anonymize clients, ...). If renegotiation is used, it will drastically reduce the "bonus DDoS mitigation".
tl;dr: it forces the client to have a certificate so that the server can verify it.<p>This is kind of a pet peeve. Anyone who ignores or wants to disable server certificate verification has to understand the risk.
It's kinda crazy that it has taken so long for someone to actually take the initiative and attempt to make bank authentication more secure.<p>I wonder if this is a custom-built solution or if Teller.io is using something like HashiCorp's Vault to do the whole SSL cert dance.<p>Either way, this looks promising.
Relying on SMS for bank security has always seemed crazy to me. It's not secure. Didn't the Telegram creator just get hacked by a Russian mobile provider that sent an SMS to itself?