This is unacceptable, obviously. It's a massive failure of their security testing protocols. I'm not particularly surprised that a vulnerability like this would get written into the code; it's an easy mistake for an inexperienced developer to make. But I'm not going to pile on the developer. We laud "separation of concerns" in our architecture, and the same pattern applies to the organization of software development teams.<p>I don't expect every developer to be aware of every vulnerability. But I do expect that a financial institution has a specialist somewhere who audits the code before it is sent for testing ("white box"), and then I expect them to have an independent audit team probe for vulnerabilities ("black box").<p>After the inexperienced developer has had his code rejected for various flaws, he will become quite aware of the obvious ways things like this can go wrong.<p>Don't get me wrong, I expect that the vast majority of developers wouldn't make this mistake in the first place. But if you aren't a specialist, it is pure hubris to think that you write code that is hardened against <i>all</i> of the attacks out there. And if you have a vulnerability, it really doesn't matter whether it's an embarrassingly simple one or one that requires sophisticated techniques to uncover and exploit. Either way, you're roadkill.<p>High-value targets like banks need two security specialists (the code audit and the penetration test) to accompany the development specialists. That's simple separation of concerns, and it works as well in team organization as it does in code organization.
In 2000(?) I employed this same 'hack' against Ameritech's online bill viewer, but couldn't get anyone's attention. I called several people at Ameritech, but couldn't get through to anyone who understood anything I was saying.<p>I tried to get ahold of the news media, but realized afterwards that the links I was sending had a session timeout associated with them, so by the time a reporter clicked a link, they got nothing.<p>Finally, I managed to get in touch with someone at 'fuckameritech.net' (IIRC) - a consumer watchdog (I hesitate to say 'group' - I think it was just one guy) who said "I'll take care of it". He made some contacts - I think he got it to a reporter in Chicago - and <i>that afternoon</i> Ameritech's online bill view and pay was taken down (a Wednesday IIRC), and it wasn't brought up again until Monday.<p>The 'fix' was not much - they were now hashing the account number into some massively long (128-char?) ID instead of just using your account number. But it was all still visible in the URL, which was the bigger problem to start with, because it encouraged 'hackers' like me to change the account number by one digit.<p>I suspect others had noticed this before, tried to contact Citi, and couldn't get in touch with anyone who understood what the caller was saying.<p>Companies need separate 'web vulnerability' hotlines to call/contact to report issues like this - perhaps just hidden in the 'view source'. If you're good enough to find the info, you know enough to report a problem. Too low a bar?
A security guy weighs in on it here:<p><a href="http://idunno.org/archive/2011/06/14/citibank-hacked-ndash-dumb-developers-dumber-security-consultants.aspx" rel="nofollow">http://idunno.org/archive/2011/06/14/citibank-hacked-ndash-d...</a><p><i>"This was not sophisticated or ingenious, as reported, this was boringly simple. ... OWASP has had Insecure Direct Object references on it’s Top 10 list for years. It’s in the SDL Threat Modeling tool. Any security firm worth its salt checks for this"</i><p>Yes, there's a good description of this kind of trivial "hack" in the Open Web Application Security Project Top 10: <a href="https://www.owasp.org/index.php/Top_10_2010-A4" rel="nofollow">https://www.owasp.org/index.php/Top_10_2010-A4</a>
I'm kind of torn on this. On the one hand, yeah, it is a trivial flaw.<p>On the other hand, so is waving a gun at a teller. That attack has been around for decades and still works a few dozen times a year, because the cost/benefit analysis says that after hardening the banks a little it is easier to just lose a few tens of thousands of dollars every once in a while than it is to give them the Secret Service's attention to physical security.<p>That is hardly the only systemic vulnerability in the banking system. For example, let's suppose I want to compromise your account number and credentials sufficient to take you for every penny you possess. You know what I need? A check of yours. Any will do. Everything I need to create a demand draft against your account is on every check you have ever written. Every employee of every business you have ever paid by check got the keys to your financial kingdom.<p>You may not be aware of it, but since those credentials are assumed compromised, the security is in a) catching me when I use the demand draft to suspiciously drain your account and b) failing that, making you whole out of the bank's pocket. The numbers have been crunched: it is vastly, vastly more efficient to treat fraud as a cost of doing business than it is to tighten the screws 100%<p>The attack surface on software the size and complexity of a bank's is like the Death Star, except any single rivet being out of place will eventually result in this headline.<p>(Step #1 in tightening the screws would be turn off public facing websites, because inexpert users plus compromised machines means that no banking website will ever be secure, even without coding errors. This will never happen, because the provable cost savings of moving customers to online banking roflstomp over the marginal fraud risk.)
How is this even possible? In my very first website I built from scratch using PHP, I paid attention to the possibility of this. I can't say for certain that I fully protected against it, but I <i>tried</i>. That little trick would not have worked.<p>How is it that a bank, of all places, pays money for a web infrastructure, and manages to employ people who don't even think about the most basic of attacks? I've been changing info in URLs since I started using the internet.
You'll surely enjoy the quote from the linked article:<p>=======<p>The method is seemingly simple, but the fact that the thieves knew to focus on this particular vulnerability marks the Citigroup attack as especially ingenious, security experts said.<p>=======<p>Sorry but no, this isn't ingenious - it's really the basics!!!
If this really was the "hack", you can be sure that Citi has opened itself up to a whole world of negligence lawsuits. This is the same as having a vault where any customer could walk in and just browse around the safe deposit boxes. Sure, it might be tough to get authenticated to enter the vault, but once you're there...<p>This is something that should cause the immediate dismissal of the CIO, but sadly, probably won't.
"Gawker media blog spam".<p>Visit <a href="http://www.nytimes.com/2011/06/14/technology/14security.html?pagewanted=all" rel="nofollow">http://www.nytimes.com/2011/06/14/technology/14security.html...</a> instead
A naive question here:<p>Suppose I accidentally stumble upon a gaping security hole in my bank's online service (or any other online service for that matter).<p>Am I legally obliged to notify them of that security bug? Can I offer the bank my assistance, for hire, in solving the bug without it constituting blackmail? (i.e. I'd be happy to help you solve this at a $300/hr rate)
How can a bank this large have such poorly designed security? It's ridiculous. Hopefully, all these latest hacks get everyone else to treat security more seriously. There could be a lot of other banks that do the same thing as Citigroup. So if one gets hacked, at least the others will remember to review their security policies, so it doesn't happen to them, too.
What's worse: that every customer's ID in the database was exposed in the URL, or that <i>there was no ACL to test against</i>? If a user is logged in, you have their account ID stored in a session. If they navigate to a page that their account ID can't see (like another person's account), then kick them out. Astoundingly simple.
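Something like this is all it takes; a rough Python/Flask-style sketch (the route, the ownership mapping, and the names are made up, not Citi's actual stack):

    # Rough sketch: compare the account asked for in the URL against the
    # identity held in the server-side session, and bail out on a mismatch.
    from flask import Flask, session, abort

    app = Flask(__name__)
    app.secret_key = "change-me"          # required for server-side sessions

    OWNERS = {"1234567890": 42}           # hypothetical account -> customer map

    @app.route("/account/<account_id>")
    def view_account(account_id):
        customer_id = session.get("customer_id")
        if customer_id is None or OWNERS.get(account_id) != customer_id:
            abort(403)                    # logged in, but not your account: kick them out
        return "statement for account " + account_id

One lookup and one comparison per request; that's the whole ACL.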
Seriously how do people stay in their jobs allowing crap like this to happen? The CIO or CTO at Citi should get the boot. Until companies like this and Sony start making examples of people, this kind of sloppiness that gives our industry a bad name will continue.
I've been ranting about this stuff to my friends for a couple of years now. There are some alarming trends. First off, a pen test is often treated the same as an attorney-client relationship. If the test turns up particularly costly bad news, I've seen a handful of testers have the relationship essentially severed, receive some hard language from a lawyer about discussing it, and then receive a check from a private account, as if the company doesn't want to leave any trace that they actually knew about the problems. (I'm not joking; some medium-sized companies have done this.)<p>With some of the regulations, the big missing piece is openness; there is no transparency into it at all. Any audited company should say who audited them, and then after some period of time, 180 days maybe, the audit should be made public. The business risk is that customers will leave, but in many cases, like the PlayStation Network, customers effectively can't leave: they've already invested in something and there isn't an alternative. In many other cases it's not typically going to be widely publicized. If the customers can't leave en masse, there is no business pressure for security, and without any transparency the regulations will simply be gamed.
I discovered that my bank, Banque Nationale, used GET to delete transactions from the history. Somebody could then send bank clients an email with an image linked to this GET action and delete a client's transactions if he was logged into the bank and reading his email at the same time.
It wasn't a big risk, but I don't understand how this went live. I mean, if a bank can't get that POST is for C_UD and GET is for _R__, then who can?
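For what it's worth, the fix is small in most frameworks. A rough Flask-style sketch (route and token names invented, nothing to do with Banque Nationale's real system):

    # Rough sketch: state-changing actions require POST plus a CSRF token, so a
    # hot-linked <img> in an email (which can only fire a GET with the victim's
    # cookies) can't delete anything.
    from flask import Flask, session, request, abort

    app = Flask(__name__)
    app.secret_key = "change-me"

    @app.route("/history/delete/<tx_id>", methods=["POST"])   # GET is rejected outright
    def delete_transaction(tx_id):
        token = session.get("csrf_token")                     # minted at login
        if not token or request.form.get("csrf_token") != token:
            abort(403)                                        # missing or forged token
        return "deleted " + tx_id                             # stand-in for the real delete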
This is like the first thing you learn in web app security (defender or attacker), and you don't even need to write a script; a tool such as <a href="http://code.google.com/p/fm-fsf/" rel="nofollow">http://code.google.com/p/fm-fsf/</a> will scrape the data quickly.<p>Even though it's insanely easy to spot and exploit, it's also easy to miss while coding. But any decent pen-tester will find it. Regardless, it's unacceptable for a finance company.
I know exactly how this occurs, because I recently met a "Senior Web Developer" at an established business who was basically their acting architect because he was their first coder, and therefore his non-technical bosses regarded him as some kind of genius because he knows how to unjam the office printer. He didn't know a lick of Unix, didn't understand load balancing, and had very weak SQL skills. He was your typical framework junkie who couldn't imagine writing even the simplest web app without a framework to do all the heavy lifting. All he wanted was for me to recommend an even simpler web framework so he wouldn't have to write any SQL at all. No doubt some day his code will be generating headlines like this one, and he will no doubt blame whatever framework he used, his bosses will simply mandate that they switch to a more secure framework pronto, and they'll promote this boob to Senior Architect to lead the project.
The big question is why the structure of the IT department lent itself to doing something so stupid.<p>You can fire the CIO, you can replace the offshore developers with onshore, or vice versa, but experience says it won't matter.<p>I looked in amazement at googletesting's dependency graph test suites yesterday, and realised that the playing field is not flat at all.<p>Reading and writing code is the literacy of the 21st century. And in the end most big companies are like newspapers owned and managed by illiterates.<p>It does not matter how you rearrange the structure or the hierarchy; when the chips are down, decisions will be made based on what the illiterate management understands to be the best way to work. As such it is infinitely unlikely that the decision will be set up to support what a literate person would decide.<p>Until a generation of coders grows up, or all the illiterate companies go bankrupt, this will merely be one of a myriad of pathologies exhibited by large companies run by the illiterate.
I have real problems believing that this could be true. Not even a first-year student would be stupid enough to expose a user ID in the URL, read it back without any access-rights checking, and use it for access to the related account data. How would they even get the idea to do such a thing?<p>And as for the "hackers", I guess legally this was not even a break-in. At least in Germany, for something to legally count as a break-in, a computer system must be "specially secured with the intention of preventing access". Well, this system wasn't.<p>...still, I have a hard time believing that it could be true.
It's a basic error.
A tech architect should be seeing this in milliseconds. It's a total design flaw. You should not be going straight to SQL with just parameters in a querystring; there should at least be an authenticated-user account verification check.<p>It also doesn't say much for the company doing the security review; it's a basic check.
Furthermore, to not have a user/owner ID to join on there (no doubt an SQL back end) is shameful. I mean, I can see it now:<p>select x
from accounttable
where accountnumber = @val<p>How about simply:<p>select x
from accounttable
where accountnumber = @accno
and ownerid = @ownerid
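And on the application side, the owner ID has to come from the session, never from the querystring. A rough sketch using sqlite3 as a stand-in back end (same made-up table and column names as above):

    # Rough sketch: both values are bound as parameters, and ownerid comes from
    # the server-side session, so editing the account number in the URL returns
    # nothing rather than someone else's statement.
    import sqlite3

    def fetch_statement(conn, account_number, session_owner_id):
        row = conn.execute(
            "SELECT x FROM accounttable WHERE accountnumber = ? AND ownerid = ?",
            (account_number, session_owner_id),
        ).fetchone()
        return row    # None unless the logged-in user actually owns the account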
What worries me the most is that one expert that the Daily Mail interviewed said "It would have been hard to prepare for this type of vulnerability." The same expert "wondered how the hackers could have known to breach security by focusing on the vulnerability in the browser."<p><a href="http://www.dailymail.co.uk/news/article-2003393/How-Citigroup-hackers-broke-door-using-banks-website.html" rel="nofollow">http://www.dailymail.co.uk/news/article-2003393/How-Citigrou...</a>
I've never worked for a bank, or any company that held sensitive information. I've only worked for companies that sold products to be used internally. Grains of salt are on the table to your left.<p>What this looks like, in the context of all the other serious recent breaches like Sony and the IMF, and from the point of view of someone who's never had to fight this particular battle but knows a little code, is that these corps deployed online apps in the early days when this wasn't a major part of their corporate face. Practices and points of view evolved from an initial environment where there just wasn't as much motivation for criminals to crack apps, because there wouldn't be that much of a market for what they stole. So corps could get away with deploying almost anything, relying on both security through obscurity and security through rarity (breaches were rare due to low profit). People in corporate offices that even knew their corps had these apps would be rare because the prestige of managing these people and apps would be low.<p>The apps we have today would then be direct descendants of the old insecure apps, and in many cases would be built directly on those old apps. Layers of mud, and you can't change the inside layers because old mud is brittle.<p>And now the corps are going up against, not people who are merely exploring or looking for bragging rights, but people working for criminal enterprises that, while not having the global scope of banks, are large enough and <i>focused</i> enough to directly challenge the technical power of the banks. And the banks are working with old, dry mud.<p>Again, grains of salt, but I suspect I'm in the right salt mine.
I have a feeling there's more to it than a clear account number reference in the URL. It was probably a base64-encoded account number or an unsalted hash of the account number (i.e. rainbow-table reversible), and the quality assurance analysts probably never questioned it.<p>Disclaimer: I worked in product development making banking software, and simple URL hacking was always a standard test.
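If it really was base64, that's barely better; base64 is an encoding, not encryption, and anyone can reverse it in one line (the value below is made up):

    # base64 "obscures" nothing.
    import base64

    token = "MTIzNDU2Nzg5"            # what might show up in the URL (made-up value)
    print(base64.b64decode(token))    # b'123456789'  <- the raw account number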
"Security experts": The attack was "especially ingenious," and "would have been hard to prepare for."<p>The experts are certainly part of the problem.
I wouldn't necessarily blame this on the guy who programmed it. However, the person who spec'd the application would be due for a quick demotion. The problem with antiquated bank systems is that the teller is trusted with access to any account. So when it came to web-enabling the old teller application, someone did some screen scraping as a prototype without having any concept of restricting access.<p>There is probably no concept of linking an authenticated account to a restricted set of bank accounts. Instead, they've probably wired it up to CICS directly to retrieve account details. This is why the quick fix appears to be obfuscating the account number in the URL.<p>Is there a public report anywhere? Aren't companies required to report all privacy breaches?
Historically, companies like Citi haven't faced any meaningful consequences for putting their customers at risk by not doing "security 101". Will things be any different this time?
The same bank also complains if you want to use more than 9 characters in your password... That kind of hints at how they store passwords in their database...
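A properly hashed password is stored as a fixed-size digest no matter how long the input is, so a length cap like that suggests the plaintext is sitting in a fixed-width column. A rough standard-library sketch (iteration count just illustrative):

    # The stored digest is the same size whether the password is 9 or 90
    # characters, which is why a length limit is a red flag.
    import hashlib, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest           # always a 16-byte salt and a 32-byte digest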
It's funny that this wasn't discovered sooner. I suppose everyone figured a security flaw like this would never exist, and never bothered to try. Irony...
> Think of it as a mansion with a high-tech security system -- but the front door wasn’t locked tight.<p>It's more like an apartment building with high-tech locks in the front door and apartment doors. But after you unlock the front door, you can unlock any apartment's door with your key! The keys are all identical, only the number printed on the label is different.
I was reading about this yesterday and about an hour later, Citi called me trying to sell me their fraud protection. I replied "Did it help the 200,000+ accounts that were stolen from you?" and hung up.
I wonder what Citibank is going to do about this now? Are they going to change their customers' account numbers?<p>That's the minimum that should be done for those accounts which were compromised.
I use Citibank for my family accounts, so I may have been impacted. Has anyone posted a list of compromised account numbers someplace I can check against?
So... the first login had to have valid credentials. Someone needed a Citibank account to start the scraper bot. Wonder if the FBI has talked with that guy yet.