There was a talk at the 2011 LLVM dev meeting about Renderscript's design philosophy and its LLVM-based compilers: http://llvm.org/devmtg/2011-11/Hines_AndroidRenderscript.pdf (llvm.org is down today; cached copy at http://webcache.googleusercontent.com/search?q=cache:http://llvm.org/devmtg/2011-11/Hines_AndroidRenderscript.pdf).

In short, it's neither an accident nor incompetence that aspects of current desktop GPU execution models (e.g., thread blocks, scratchpad shared memory) are not exposed in Renderscript. It's a conscious decision to make sure you can get decent performance not only on those GPUs, but also on ARMv5-v8 CPUs (with and without SIMD instructions), x86, DSPs, etc. Getting good performance on those platforms from a language that does expose such constructs (e.g., CUDA) is still an open research problem (see MCUDA, http://impact.crhc.illinois.edu/mcuda.aspx, and friends).

Even if Renderscript cared only about mobile GPUs rather than the huge variety of platforms it targets, the major contenders (Imagination, ARM, Samsung, Qualcomm, NVIDIA) have wildly different architectures, and a language that is close to the metal on one presents a huge impedance mismatch on the others. Mobile hardware is also sufficiently different from desktop GPU design that we're only now seeing SoCs that support OpenCL in hardware (driver support seems to be lagging), and you can't run CUDA on Tegra 4.
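
To make the contrast concrete, here's a minimal sketch of a Renderscript per-element kernel (using the RS_KERNEL syntax from API 17+; the file, package, and kernel names are made up for illustration). The script only describes the per-element computation, so the runtime is free to map it onto CPU threads, SIMD lanes, a GPU, or a DSP without the script ever naming a thread block or a piece of shared memory:

    // invert.rs -- minimal sketch of a per-element Renderscript kernel
    // (file/package/kernel names are hypothetical)
    #pragma version(1)
    #pragma rs java_package_name(com.example.demo)

    // Invoked once per element of the input Allocation. There is no
    // block size, block-local shared memory, or explicit SIMD width
    // here; the Renderscript runtime decides how to schedule the work
    // on whatever cores the device actually has.
    uchar4 RS_KERNEL invert(uchar4 in) {
        uchar4 out = in;
        out.r = 255 - in.r;
        out.g = 255 - in.g;
        out.b = 255 - in.b;
        return out;
    }

A comparable CUDA kernel would typically also choose a block size and maybe stage data through __shared__ memory, which is exactly the kind of hardware-specific knob Renderscript deliberately leaves out.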