Here is a summary I wrote of my attempt to render balls using only a fragment shader. I'm pretty happy with the result, and I've included a breakdown of the creation process.
In the '90s, texture maps were a luxury for anything. As a result, shaders (for offline rendering; real-time shading languages didn't exist yet) did everything procedurally.

Here is the only image I could find of the RenderMan Shading Language VIDI SPORTS SHADERS that Makina Works sold at the time (you got no source, just bytecode understood by Pixar's implementation at the time, AFAIK):

https://web.cs.wpi.edu/~matt/courses/cs563/talks/renderman/SportsBalls.jpg

What is interesting here is that the shaders did everything on a sphere primitive.
All RenderMan implementations at the time (there were more than just Pixar's) were micropolygon renderers, i.e. displacement was almost free in terms of compute overhead.

Even the football operated on a sphere that the shader deformed into the respective shape. All patterns, stripes, etc. were procedurally generated. Each shader had texture slots to apply custom logos.
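To make that concrete, here's a minimal GLSL sketch of the idea. The originals were RenderMan Shading Language and no source survives, so everything below (function names, stripe count, groove shape) is illustrative, not their code:

    // Given a unit normal on the sphere, derive longitude and build a
    // beach-ball stripe pattern with no texture lookups at all.
    vec3 stripePattern(vec3 n) {
        float lon = atan(n.z, n.x);                    // longitude in [-pi, pi]
        float stripes = 6.0;                           // bands around the ball
        float s = step(0.5, fract(lon * stripes / 6.2831853));
        return mix(vec3(1.0), vec3(0.9, 0.1, 0.1), s); // white/red bands
    }

    // Micropolygon-era displacement: push each surface point along its
    // normal by a procedural amount, e.g. to carve shallow seam grooves.
    vec3 displace(vec3 p, vec3 n) {
        float lat = asin(clamp(p.y, -1.0, 1.0));       // latitude on unit sphere
        float groove = 1.0 - smoothstep(0.0, 0.02, abs(sin(lat * 8.0)));
        return p - n * 0.01 * groove;                  // shallow groove at seams
    }

On a micropolygon renderer the displacement step runs per shading point anyway, which is why it was nearly free compared to modeling the shape as geometry.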
This reminds me of the rendering technique described for Hustle Kings on the PS3. There was a write-up, though the shader code was not included:

https://web.archive.org/web/20130913033312/http://www.voofoostudios.com/?p=33
It might be interesting to explore with the students why this would not be quite right for a general 3D view, and how it could be fixed: under perspective projection, the silhouette of an off-axis sphere is an ellipse (a conic section), not a circle.
I guessed correctly from the title that the article is about coding a 3D image, but I was waiting for the classic "two balls and a cylinder" scene that college students naturally come up with in path-tracing assignments in computer graphics courses.
I'm just getting my feet wet with shaders myself. Could this fragment shader be used in a "real" 3D game? At the moment the sphere is orthographically projected onto the screen, so it will always appear as a circle rather than an ellipse. How easy would it be to use this shader in place of a spherical mesh, as you might find in a typical geometry-based 3D renderer? That is, could you use the same technique to render multiple balls with positions in 3D space, rendered correctly and taking into account any distortion caused by the camera?
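Yes, this generalizes: the usual trick is to draw a screen-facing quad per ball and ray-trace the sphere analytically per fragment. Here's a hedged sketch under a perspective camera; the names (uCameraPos, uSphereCenter, vRayDir, uViewProj) are my assumptions, not the article's code:

    #version 330 core
    uniform vec3  uCameraPos;
    uniform vec3  uSphereCenter;
    uniform float uRadius;
    uniform mat4  uViewProj;

    in  vec3 vRayDir;   // worldPos - uCameraPos, interpolated from quad corners
    out vec4 fragColor;

    void main() {
        vec3 ro = uCameraPos;
        vec3 rd = normalize(vRayDir);

        // Classic quadratic ray/sphere intersection (a = 1 since rd is unit).
        vec3  oc = ro - uSphereCenter;
        float b  = dot(oc, rd);
        float c  = dot(oc, oc) - uRadius * uRadius;
        float h  = b * b - c;
        if (h < 0.0) discard;            // ray misses the sphere
        float t = -b - sqrt(h);          // nearest hit
        if (t < 0.0) discard;            // sphere is behind the camera

        vec3 hit = ro + t * rd;
        vec3 n   = (hit - uSphereCenter) / uRadius;

        // Write real depth so spheres sort correctly against other geometry.
        vec4 clip = uViewProj * vec4(hit, 1.0);
        gl_FragDepth = clip.z / clip.w * 0.5 + 0.5;

        fragColor = vec4(n * 0.5 + 0.5, 1.0);  // shade with the true normal
    }

The catch is that writing gl_FragDepth disables early-z rejection, which is the usual price of impostors; conservative depth (layout(depth_greater) on gl_FragDepth) can recover some of that on modern GL.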
Even after 10 years I still have anxiety looking at OpenGL code. I still don't understand why we needed to write a ray tracer during a BSc course, starting from a blinking cursor. The professor tried to defend it at the time, but I firmly believe it's the wrong curriculum for a 20-year-old CS student. Computer graphics was and is niche programming and shouldn't be taught as a mainstream requirement. Out of 100 graduates, maybe one will ever use this; why not apply some common sense and teach something more broadly useful?
Very nice! Not sure if you've heard of ShaderToy [1], but it's pretty good for sharing and viewing shaders online.

[1] https://www.shadertoy.com/
I happened to notice that PI is #define'd as 3.1415926538, which isn't a correct rounding of 3.14159265358...

Seems like they missed a '5'.
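For what it's worth, a corrected define would look like this (digits from pi = 3.14159265358979...):

    // The original dropped the second '5' of 3.14159265358...
    #define PI 3.14159265358979

Though since GLSL floats are 32-bit, anything past roughly seven significant digits rounds away either way, so the bug is cosmetic.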
From the HN guidelines: "please use the original title, unless it is misleading or linkbait; don't editorialize."

The correct title is "Rendering Pool Balls", not "Rendering my balls in a fragment shader".
FYI, "rendering my balls" says something very different to "rendering balls".

Although the hair would be impressive if you managed to do it.

(Testicles, btw.)