In math we start with a representation of the problem we have, apply operations to that representation, and (mechanically) obtain a representation of the result, which we then interpret in order to apply it back to the problem. In fact, since the full input is equivalent to the output (we may only be interested in part of the output, but that's beside the point), the whole transformation is just a change of representation.

Fully homomorphic encryption certainly looks like this: all the basic arithmetic operations are still available, only their representation is encryption-key-dependent. The mechanical transformation from one representation to another can be expensive, measured in some chosen set of operations (can we choose, say, an atomic operation that sums an arbitrary-length list and count it as O(1)?). Does our interest in transforming representations therefore depend on the chosen basis of operations? And when we say something is, e.g., polynomially hard, aren't we really proving dependencies between possible operation bases under non-self-conflicting definitions?
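To make the FHE analogy concrete, here is a minimal sketch loosely modeled on the DGHV-style symmetric scheme over the integers. The parameter sizes, function names, and counting are all illustrative and the toy parameters make it completely insecure; it only shows that adding or multiplying ciphertexts corresponds to XOR-ing or AND-ing the underlying plaintext bits, with the correspondence depending on the secret key:

    import random

    # Toy DGHV-style symmetric scheme over the integers (toy parameters, NOT secure).
    # A ciphertext is c = p*q + 2*r + m for secret odd p, random q, small noise r.

    def keygen(bits=64):
        # Secret key: a random odd integer with its top bit set.
        return random.getrandbits(bits) | (1 << (bits - 1)) | 1

    def encrypt(p, m, noise_bits=8, mult_bits=128):
        q = random.getrandbits(mult_bits)
        r = random.getrandbits(noise_bits)
        return p * q + 2 * r + m

    def decrypt(p, c):
        # Correct as long as the accumulated noise 2*r + m stays below p.
        return (c % p) % 2

    p = keygen()
    c0, c1 = encrypt(p, 0), encrypt(p, 1)

    # Ciphertext addition XORs the underlying bits, multiplication ANDs them.
    assert decrypt(p, c0 + c1) == (0 ^ 1)
    assert decrypt(p, c0 * c1) == (0 & 1)
    assert decrypt(p, c1 + c1) == (1 ^ 1)

The "expensive" part is exactly the noise bookkeeping hidden here: each ciphertext operation grows the noise, and real FHE schemes pay heavily to keep it below the decryption threshold.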
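As for the operations-basis question, a toy way to see the dependence (the counting below is purely illustrative, not a standard machine model): the same list-summing computation costs n - 1 steps if binary addition is the atomic operation, but a single step if "sum an arbitrary-length list" is itself taken as atomic.

    # Cost of summing a list under two different operation bases (illustrative only).

    def cost_in_addition_basis(xs):
        # Basis A: binary addition is the atomic operation -> n - 1 steps.
        return max(len(xs) - 1, 0)

    def cost_in_list_sum_basis(xs):
        # Basis B: "sum an arbitrary-length list" is itself atomic -> one step.
        return 1 if xs else 0

    xs = list(range(1000))
    print(cost_in_addition_basis(xs))   # 999
    print(cost_in_list_sum_basis(xs))   # 1

Any hardness claim then has to be read relative to which of these bases (or some other one) we fixed in advance.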