Decimal fractions can't necessarily be represented by a finite number of bits in binary, just like the fraction 1/3 has no finite decimal representation (0.333333...); 0.1, for example, is an infinitely repeating fraction in binary. Computers work with a finite number of bits, so you're losing some accuracy. Other representation methods could be used to store such numbers exactly (e.g. store the numerator and denominator as whole numbers), but you can't get around the fact that infinitely many numbers can't be represented with a finite number of bits (e.g. the square root of two can't be represented exactly as a ratio at all).
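
A quick Python sketch of both points, using the standard-library `decimal` and `fractions` modules (the `Fraction` type stands in for the numerator/denominator idea; the comments assume standard IEEE 754 64-bit doubles):

```python
import math
from decimal import Decimal
from fractions import Fraction

# 0.1 has no finite binary representation, so the stored double
# is only the closest representable value, and small errors accumulate:
print(0.1 + 0.2 == 0.3)   # False
print(Decimal(0.1))       # the exact value actually stored for 0.1

# Storing numerator and denominator as whole numbers keeps fractions exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# ...but an irrational number like sqrt(2) can't be written as a ratio,
# so any finite representation of it is still an approximation:
print(Fraction(math.sqrt(2)))  # the exact ratio for the *approximate* double
```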