This is a great question. What do we lose by thinking of real computers as Turing machines when they are in fact finite? For one thing, I believe the halting problem doesn't arise in reality, because undecidability only holds for machines with unbounded memory (i.e. Turing machines), not finite ones (i.e. physical computers): a machine with finitely many configurations must either halt or revisit a configuration, and revisiting a configuration in a deterministic machine means looping forever.
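To make that concrete, here is a minimal sketch of a halting decider for a deterministic finite-state system. The interface (`step`, `state`, `halted`) is hypothetical, just for illustration: since the state space is finite, we simulate and record every configuration we see, and a repeated configuration proves the machine will never halt.

```python
def halts(step, state, halted):
    """Decide halting for a deterministic system with finitely many states.

    step   -- transition function: state -> next state (deterministic)
    halted -- predicate: True if the state is a halting state
    (Hypothetical interface, for illustration only.)
    """
    seen = set()
    while not halted(state):
        if state in seen:
            # Deterministic + repeated configuration => eternal loop.
            return False
        seen.add(state)
        state = step(state)
    return True

# Toy examples: counters over a finite state space {0..4} and {0..3}.
print(halts(lambda s: (s + 3) % 5, 2, lambda s: s == 0))  # reaches 0: True
print(halts(lambda s: (s + 2) % 4, 1, lambda s: s == 0))  # cycles 1->3->1: False
```

The catch, of course, is that "finitely many configurations" for a real computer means every bit of RAM and disk, so `seen` would need to hold up to 2^(memory size) entries; the decider exists in principle but is astronomically infeasible in practice.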