David Freedman has the following dialogue in his book Statistical Models: Theory and Practice:

Philosophers' stones in the early twenty-first century: Correlation, partial correlation, cross-lagged correlation, principal components, factor analysis, OLS, GLS, PLS, IISLS, IIISLS, IVLS, LIML, SEM, HLM, HMM, GMM, ANOVA, MANOVA, Meta-analysis, logits, probits, ridits, tobits, RESET, DFITS, AIC, BIC, MAXNET, MDL, VAR, AR, ARIMA, ARFIMA, ARCH, GARCH, LISREL [...]

The modeler's response:
We know all this. Nothing is perfect. Linearity has to be a good first approximation. Log linearity has to be a good second approximation. The assumptions are reasonable. The assumptions don't matter. The assumptions are conservative. You can't prove the assumptions are wrong. The biases will cancel. We can model the biases. We're only doing what everybody else does. Now we use more sophisticated techniques. If we don't do it, someone else will. What would you do? The decision-maker has to be better off with us than without us. We all have mental models. Not using a model is still a model. The models aren't totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where's the harm?