Suppose you’re a moderately experienced programmer and you want to learn about graphics. After wandering the Internet for a while, it becomes obvious there are two distinct disciplines when it comes to rendering. The first is performance-oriented and founded upon projecting and rasterizing geometry. Applications using tech on this side of the fence are generally interactive, and the time to render a frame is restricted to less than 30 milliseconds. The second discipline is quality-focused and founded upon tracing rays. On this half of the field, computational expense is a second-rate concern and the time to render a frame can range from a few minutes to a few days.
Let’s say you go the real-time route and hit the Internet in search of tutorials and API documentation. Here’s what a few hours and 1,000-plus lines of Win32 and DirectX code get you:
That’s right: a window with a black client area. Sure, it’s redrawing at 60 Hz, receiving messages from the OS, and maybe it even reacts to resize events properly. The trouble is you can’t show any of that to a non-programmer and have them understand you did something meaningful. When starting from scratch, it takes a solid week of toil to get pixels on screen and prove you aren’t perpetually twiddling your time away. Such an investment is a huge discouragement to a sprouting graphics programmer. If you opt for a ray tracing approach, the same amount of code will net you something much more impressive:
Editing a mere three lines of the same program yields a remarkably different (in terms of synthesized phenomena, not visual features) and equally interesting image:
Much to my astonishment, I was able to generate the previous images from scratch in a few hours without using any cumbersome APIs (I’m looking at you, DirectX and Win32) or consulting the Internet. An additional day of tinkering led to the following two images:
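To give a sense of scale, the core of such a from-scratch tracer fits in a few dozen lines: shoot a ray through every pixel, intersect it with the scene, shade the hit point, and dump the result to a file — no windowing or graphics API anywhere. The sketch below is illustrative only (Python rather than my actual code, with a one-sphere scene and constants invented for the example):

```python
import math

# A minimal sketch of a from-scratch ray tracer: one hard-coded sphere,
# one directional light, Lambert shading, grayscale output. Every name
# and constant here is illustrative, not taken from the program above.

WIDTH, HEIGHT = 128, 128
SPHERE_CENTER = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
LIGHT_DIR = (-0.577, 0.577, 0.577)  # unit vector pointing toward the light

def hit_sphere(origin, direction):
    """Nearest positive ray parameter t where the ray hits the sphere, or None."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_CENTER))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render():
    """Return HEIGHT rows of WIDTH grayscale values in 0..255."""
    image = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            # Camera at the origin looking down -z; map the pixel to [-1, 1].
            dx = 2.0 * x / WIDTH - 1.0
            dy = 1.0 - 2.0 * y / HEIGHT
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            d = (dx / length, dy / length, -1.0 / length)
            t = hit_sphere((0.0, 0.0, 0.0), d)
            if t is None:
                row.append(0)  # background stays black
            else:
                p = tuple(t * di for di in d)  # hit point
                n = tuple((pi - ci) / SPHERE_RADIUS
                          for pi, ci in zip(p, SPHERE_CENTER))
                lambert = max(0.0, sum(ni * li for ni, li in zip(n, LIGHT_DIR)))
                row.append(int(255.0 * lambert))
        image.append(row)
    return image

def write_pgm(path, image):
    """Dump the image as a plain-text PGM, viewable in most image apps."""
    with open(path, "w") as f:
        f.write("P2\n%d %d\n255\n" % (WIDTH, HEIGHT))
        for row in image:
            f.write(" ".join(str(v) for v in row) + "\n")
```

Calling `write_pgm("sphere.pgm", render())` gets real pixels on screen with nothing but the standard library — which is exactly the contrast with the week of Win32 boilerplate.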
In college my Computer Science peers were always fiddling with ray tracers while I was off experimenting with shadow volumes:
and extracting assets from a leaked Doom III demo:
The reason for the difference in our extracurricular activities never dawned on me until now. Getting a basic ray tracer up and running is easy, and early images can be impressive. Getting an interactive graphics engine off the ground is tedious, and early images are ho-hum. The same contrast extends beyond getting started; for example, shadows are an implicit feature of any ray tracer (they just happen as a consequence of the algorithm), while a real-time system must explicitly compute shadowed regions. This dichotomy can be generalized to a theme seen over and over in software engineering. An elegant algorithm that’s concise and hugely flexible is oftentimes exorbitantly slow at solving the problems put to it. Cracking the same class of problems can be done much more efficiently if you are willing to develop a sprawling software system that explicitly deals with highly specific problem cases.
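The shadow point is worth making concrete. In a ray tracer, deciding whether a shading point is lit costs exactly one more intersection test: fire a ray from the point toward the light, and if anything blocks it, the point is dark. No shadow maps, no stencil buffers, no extra passes. A sketch of that one test, with a hypothetical two-sphere scene invented for illustration:

```python
import math

# Why shadows "just happen" in a ray tracer: at each shading point you
# cast one extra ray toward the light; if any object blocks it, the
# point is in shadow. The scene below is made up for this example.

SPHERES = [
    ((0.0, 0.0, -3.0), 1.0),   # the sphere being shaded
    ((0.0, 2.5, -3.0), 0.5),   # an occluder hovering between it and the light
]
LIGHT_DIR = (0.0, 1.0, 0.0)    # directional light straight overhead

def hit_any(origin, direction, skip=None):
    """True if the ray hits any sphere other than index `skip`."""
    for i, (center, radius) in enumerate(SPHERES):
        if i == skip:
            continue  # avoid re-hitting the surface we started on
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc >= 0.0 and (-b - math.sqrt(disc)) / 2.0 > 1e-4:
            return True
    return False

def in_shadow(point, sphere_index):
    # The whole "shadow algorithm": one ray from the point toward the light.
    return hit_any(point, LIGHT_DIR, skip=sphere_index)
```

The real-time equivalent — shadow volumes or shadow maps — is an entire subsystem, which is the algorithm-elegance-versus-engineered-speed trade-off in miniature.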