And with this, one of the huge flaws of OAuth comes into play. OAuth just doesn't work with locally installed applications, since it's impossible to hide anything there, yet OAuth strongly relies on the client having some secret knowledge (the client secret).<p>As long as all clients are equal when using the API, this might go well (minus some malicious clients), but once some clients start to be more equal than others - even more so once the service starts acting like a real jerk - the whole system falls down.<p>What we see here is Twitter's secrets leaking out (though remember: that's more or less public data, as it's technically <i>impossible</i> to hide that info - the server has to know it) due to them being jerks and giving their own client preferential access.<p>What does this mean? For now, probably not much, as I imagine the bigger third-party clients will want to behave.<p>It might, however, make Twitter reconsider their policies.<p>If not, this is the beginning of a long cat-and-mouse game: Twitter updating their keys and using heuristics to recognize their own client, followed by third-party clients providing a way to change the client secret[1].<p>One thing is clear, though: Twitter will lose this game, as the client secret has to be presented to the server.<p>Using SSL and certificate pinning, they can protect the secret from network monitors, but the secret can still be extracted from the client binary; at that point they might encrypt it inside the client, at which point attackers will disassemble the client and extract the key anyway.<p>It remains to be seen how far Twitter is willing to go playing that game.<p>[1] Even if the keys don't leak out, as long as Twitter allows their users to create API clients, an editable client secret is a way for <i>any</i> Twitter client to remain fully usable
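To see why the secret can never really be private, here is a minimal sketch of OAuth 1.0a HMAC-SHA1 request signing (per RFC 5849), using only the Python standard library. It is simplified (a real request also carries a nonce, timestamp, and token), and the URL, key, and secret below are placeholders; the point is that client and server derive the same signature from the same consumer secret, so the secret must exist on both sides:

```python
import base64
import hashlib
import hmac
import urllib.parse


def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature (simplified sketch).

    The signing key is "consumer_secret&token_secret": the server must hold
    the same consumer secret to verify, which is why a secret shipped in a
    client binary can never be truly hidden.
    """
    enc = lambda s: urllib.parse.quote(s, safe="")
    # Parameters are sorted and percent-encoded into the signature base string.
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), enc(url), enc(param_str)])
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# Placeholder values; any party knowing the consumer secret can sign as this client.
sig = oauth1_signature(
    "POST",
    "https://api.twitter.com/1.1/statuses/update.json",
    {"status": "hello", "oauth_consumer_key": "not-really-secret"},
    consumer_secret="s3cret",
)
print(sig)  # a 28-character base64-encoded HMAC-SHA1 signature
```

Nothing in this scheme distinguishes "the real client signing" from "anyone holding the leaked secret signing", which is the whole problem.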
For some reason several of the commenters here are explaining this away as a protocol bug (specifically in OAuth), but the challenge isn't protocol-specific at all. Rather, it's a problem with all client/server apps: trusting any client requires additional support from the platform (self-assertion, or possession of a secret by the client alone, is insufficient), and even then it's a known hard problem.<p>This has been true of client/server apps for a very long time, well predating any particular protocol. I'd be sincerely interested in any solutions people come up with that don't depend on additional extrinsic platform capabilities.
This Twitter client situation reminded me of some ancient (1999) history, so in case you're wondering how far companies will go to try to enforce a theoretically-impossible preference for one client of their service...<p>The MSN Messenger team added America Online chat support to the Messenger client. AOL didn't like that and tried a variety of approaches to reject Messenger. The protocol was undocumented, so there were lots of tricks they could play. At one point they went (IMHO) a bit too far: they deliberately exploited a buffer overflow in their own client!<p>One person's contemporaneous summary: <a href="http://www.geoffchappell.com/notes/security/aim/index.htm" rel="nofollow">http://www.geoffchappell.com/notes/security/aim/index.htm</a>
If you ship a binary to a person’s computer and that binary has a secret embedded in it, that secret will eventually be discovered.<p>This has been discussed here before: <a href="http://news.ycombinator.com/item?id=4411696" rel="nofollow">http://news.ycombinator.com/item?id=4411696</a>
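To illustrate how little effort "discovered" can take, here is a toy sketch mimicking the Unix `strings` tool in Python: it pulls runs of printable ASCII out of a binary blob. The "binary" and the embedded credential below are entirely made up:

```python
import re


def printable_strings(blob: bytes, min_len: int = 6):
    """Yield runs of printable ASCII of at least min_len bytes,
    much like the Unix `strings` tool does for a binary."""
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, blob):
        yield match.group().decode("ascii")


# Toy "binary": a fake credential embedded among non-printable bytes.
binary = b"\x00\x7f\x01ELF\x02" + b"consumer_secret=abc123XYZsecret" + b"\x9c\x00"
found = [s for s in printable_strings(binary) if "secret" in s]
print(found)  # the embedded credential falls right out
```

Real apps may obfuscate or encrypt the embedded value, but as the parent thread notes, the client must eventually present the secret in usable form, so a debugger or disassembler gets it in the end.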
Something I've been pointing out about OAuth for <i>ever</i> is that it's a method for delegating authorization to agents who wish to act on behalf of the user. When it is the actual user him/herself who is acting, there's nothing wrong (and a lot of things right) with username/password authentication.
I use OAuth for an application written in PHP, and as such there's no possible way to trust the client key/secret, given that the source is not obfuscated in any way. This application talks to my own server, and the OAuth flow is basically just a way to avoid storing username/password combinations. The client key/secret have to be treated as permanently compromised, so the only thing I use them for is version usage statistics.<p>The question is: given that your key/secret <i>will</i> be compromised, is there any point in even having it in the OAuth flow?
Interestingly, the keys were posted 5 months ago.<p><a href="https://gist.github.com/re4k/3878505/revisions" rel="nofollow">https://gist.github.com/re4k/3878505/revisions</a>
For people who think this is going to cause drive-by Twitter hijacks, remember that Twitter stores the callback URL on their side for this very reason. Any web app impersonating these apps will fail at the callback stage.
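A server-side check along those lines might look like the following sketch. The consumer key and callback URLs are made up, and Twitter's actual implementation is not public; this just shows the idea of pinning each client to its registered callback:

```python
# Hypothetical registry of callback URLs, keyed by consumer key,
# captured when each app was registered with the service.
REGISTERED_CALLBACKS = {
    "example-consumer-key": "https://client.example.com/oauth/callback",
}


def callback_allowed(consumer_key: str, supplied_callback: str) -> bool:
    """Only honor an oauth_callback that exactly matches the registered one,
    so a leaked key/secret alone can't redirect a victim's tokens elsewhere."""
    registered = REGISTERED_CALLBACKS.get(consumer_key)
    return registered is not None and supplied_callback == registered


print(callback_allowed("example-consumer-key",
                       "https://client.example.com/oauth/callback"))  # True
print(callback_allowed("example-consumer-key",
                       "https://evil.example/steal"))                 # False
```

Note this only blocks the web-redirect attack; it does nothing against someone who uses the leaked keys directly in their own native client.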
I just tested out the iPhone key/secret using the script here [1] and it worked perfectly. I'm assuming it'll probably bump my actual iPhone client off, though.<p>[1] <a href="https://gist.github.com/tcr/5108489/download#" rel="nofollow">https://gist.github.com/tcr/5108489/download#</a>
This wouldn't be dangerous if OAuth 1/2 followed my advice (a static redirect_uri):<p><a href="http://homakov.blogspot.com/2013/03/oauth1-oauth2-oauth.html" rel="nofollow">http://homakov.blogspot.com/2013/03/oauth1-oauth2-oauth.html</a>
Here is an interesting talk on OAuth by its creator: <a href="http://2012.realtimeconf.com/video/eran-hammer" rel="nofollow">http://2012.realtimeconf.com/video/eran-hammer</a>
If you find this sort of 'research' fun and/or you find this sort of thing to be the norm rather than the exception ;), you should check out <a href="https://www.appthority.com/careers" rel="nofollow">https://www.appthority.com/careers</a>