When available, it's a good idea to use the F16C/CVT16 instruction set [0] for converting between single (32-bit) and half (16-bit) precision. I think ARM NEON has a comparable set of conversion instructions. They operate on SIMD registers, so if you're already using SIMD there can be substantial benefits in using them.

The actual arithmetic is still done in 32-bit precision; the conversions only happen when loading from or storing to memory. Some GPUs actually have 16-bit arithmetic internally, but most use 32-bit ALUs and just convert on load/store.

Alternatively, if you don't care about NaNs, infinities, denormals, and whatnot (e.g. the use cases for 3D model data mentioned in the README don't really involve them), some simple bit shifting can do the conversion.

Here's a snippet of Python code that I've used:

    from struct import pack, unpack

    def float2half(float_val):
        # Reinterpret the float's bits as a 32-bit unsigned integer
        f = unpack('I', pack('f', float_val))[0]
        if f == 0: return 0                  # +0.0
        if f == 0x80000000: return 0x8000    # -0.0
        # sign | exponent rebiased from 127 to 15 | top 10 mantissa bits (truncated, not rounded)
        return ((f>>16)&0x8000) | ((((f&0x7f800000)-0x38000000)>>13)&0x7c00) | ((f>>13)&0x03ff)

    def half2float(h):
        if h == 0: return 0.0        # +0.0
        if h == 0x8000: return -0.0  # -0.0
        # sign | exponent rebiased from 15 to 127 | mantissa shifted back up
        f = ((h&0x8000)<<16) | (((h&0x7c00)+0x1C000)<<13) | ((h&0x03FF)<<13)
        return unpack('f', pack('I', f))[0]
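
As a quick sanity check, here's a small round-trip test (a minimal sketch; the numpy comparison is my addition and assumes numpy is available, it isn't needed by the snippet itself):

    import numpy as np

    for x in [0.0, -0.0, 1.0, -2.5, 0.3, 65504.0]:
        h = float2half(x)
        back = half2float(h)
        # numpy's float16 conversion rounds properly and handles NaN/Inf/denormals
        ref = float(np.float32(np.float16(x)))
        print(f"{x:>10}: half bits 0x{h:04x}, back {back}, numpy {ref}")

For values like 0.3 you'll see the truncated result differ slightly from numpy's rounded one. If numpy is already a dependency, arr.astype(np.float16) does the whole-array conversion with proper rounding, so I'd reach for that before the hand-rolled version.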

[0] https://en.wikipedia.org/wiki/F16C