I was asked to do an analysis of Fannie and Freddie during a job interview in 2001: a 3-page report on two institutions I'd never heard of before, with a stack of papers around 3 feet high consisting of a variety of financial statements, promotional material, and news clippings, to be completed in pen within 3 hours.

Not being from the US, unaware of these institutions, and boggled by how the concept of the state backing fixed-rate mortgages could be sensible, I wrote my 3 pages and somehow got the job.

> It should not be overlooked that in the not-so-distant past, i.e. when I worked as a mortgage analyst, an analysis of loan-level mortgage data would have cost a lot of money. Between licensing data and paying for expensive computers to analyze it, you could have easily incurred costs north of a million dollars per year.

If it existed. It did not. Computers were not needed to analyse a nice big data set, because a nice, big, transparent data set did not exist. Those who did quite nicely were the ones who realized the big data set didn't exist: they dug themselves, got confused, and then realized everyone else was confused or delusional too.

Splitting things by state and making data available is a step up in transparency. But it is fine-tuning an organ based on where the horn is, without understanding what notes are being played.

Provision of this type of data is badly stitching a bad gash. It confirms what has been known for years. A better question would be: "If you're issuing bonds based on loans to people with a FICO 'thin file' score of 600, for whom you've not done basic background checks, and who are seeking to borrow 10 times their annual income, don't you see something wrong?"

Basic questions and understanding the underlying data are more important than optimizing headline metrics.
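To make that concrete, here is a minimal sketch of the kind of basic screening the last paragraph is asking for. It is illustrative only: the `Loan` fields and the thresholds are hypothetical stand-ins, not any real loan-level file format or underwriting rule.

```python
# A minimal sketch of the "basic questions" above. All field names and
# thresholds are hypothetical, chosen to mirror the examples in the comment.

from dataclasses import dataclass

@dataclass
class Loan:
    fico: int                # borrower credit score
    thin_file: bool          # score built on very little credit history
    annual_income: float     # stated annual income
    loan_amount: float       # principal sought
    background_checked: bool # whether basic checks were done at all

def red_flags(loan: Loan) -> list[str]:
    """Return the obvious warning signs for a single loan."""
    flags = []
    if loan.thin_file and loan.fico <= 600:
        flags.append("thin-file borrower with a weak score")
    if not loan.background_checked:
        flags.append("no basic background check on file")
    if loan.loan_amount > 10 * loan.annual_income:
        flags.append("borrowing more than 10x annual income")
    return flags

suspect = Loan(fico=600, thin_file=True, annual_income=30_000,
               loan_amount=320_000, background_checked=False)
print(red_flags(suspect))
# ['thin-file borrower with a weak score',
#  'no basic background check on file',
#  'borrowing more than 10x annual income']
```

The point is not the code, which is trivial; it's that each check reads directly off the loan record, before anyone aggregates by state or optimizes a headline metric.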