The quest for realism in computer graphics has had an exciting history. We have seen an evolution from crude pictures made on expensive research machines to compelling images made on desktop computers. This quest has been one of the focal points of my professional life. It has been a fun and rewarding twenty years.
Back in the early days of computer graphics we were happy to get any pictures at all. The objects had jagged edges and looked like they were made out of plastic, but we were full of ideas that could make them look better. Our goal was to make the images look real.
Most of the early developments were at the University of Utah, where Dave Evans and Ivan Sutherland started a remarkable computer graphics program that attracted people from all over. In addition, Utah had a notable image processing group led by Tom Stockham. The first major task was to develop algorithms for determining what was visible in a scene, which led to the famous paper analyzing hidden-surface algorithms by Ivan Sutherland, Bob Sproull, and Bob Schumacker. My own contribution here was the development of the Z buffer.
Better shading was the next major problem. Henri Gouraud discovered that he could substantially reduce the faceted appearance of objects by linearly interpolating the shading values at the vertices of the polygons. Bui Tuong Phong took it a step further by interpolating the surface normals, which produced a smoother appearance as well as highlights. I made a contribution by developing texture mapping and the display of cubic patches. It was only at this point that I felt pictures were starting to look realistic. Jim Blinn went on to extend texture mapping to include perturbation of surface normals, producing stunning pictures of oranges and strawberries. We had reached the point where the calculations required for shading became greater than those required to determine visibility.
As computer graphics developed, major advances began to come from other places. I left Utah to become the director of the Computer Graphics Lab at the New York Institute of Technology, where researchers from all over the country came to develop realistic images and animation. During the late 70's Cornell grew to be a major computer graphics institution under the direction of Don Greenberg. It was there that Rob Cook discovered that the lighting model everyone used best approximated plastic. He proposed a different model that let us simulate other surfaces such as metals.
By the early 80's realism fever began to catch hold. Turner Whitted made ray tracing popular and thus started a cavalcade of advances in the field. At first the pictures took an extraordinarily long time to produce, sometimes days, but again new algorithms were developed that made ray tracing more efficient. Around the same time SIGGRAPH became a phenomenon. Each year the proceedings had some great new pictures: fractals, plants, radiosity, cloth, and so on. Some criticized the emphasis on "pretty pictures," but the excitement was undeniable. Every time someone made a new discovery it would be shown to colleagues, drawing raves and applause and spurring others to redouble their efforts to do even better.
In 1979 Alvy Ray Smith and I had joined Lucasfilm to start the Computer Division, which later became Pixar. Some of the best people from the above institutions gathered to try to make images so realistic that they could be used in live-action motion pictures.
We decided to throw the book out and start all over again in defining a system for creating extremely complex pictures with a high degree of realism. We had a phenomenal team: both Loren Carpenter and Rob Cook were amazingly productive and creative. Rodney Stock suggested dithered sampling, and Tom Porter suggested spreading the samples over time. Rob generalized the shading formulas into "shade trees"; Pat Hanrahan and Jim Lawson generalized these into a shading language.
The pictures were starting to look amazingly good.
And just as important, we now understood what it meant to describe an image. This opened the door to the definition of an interface that could be independent of algorithms, hardware, and speed of execution. Pat Hanrahan put all that we knew about geometry, lighting models, ray tracing, antialiasing, motion blur, and shade trees into a compact interface, which he named RenderMan.
Our goal wasn't just to make photorealistic pictures; it was also to make the tools and systems that would let thousands of people create pictures of whatever they chose to design. Two more things were needed. Mickey Mantle, Tony Apodaca, Darwyn Peachey, and Jim Lawson took research code and made software that met the RenderMan specification. Finally, Steve Upstill took responsibility for explaining it all in this book.
The quest for better pictures will continue, of course, but we have reached a new era. I am proud to be associated with the team that has made the results of twenty years of research available to everybody.
Ed Catmull
May 1989
The RenderMan interface is meant to be the PostScript of 3-D graphics. Just as PostScript allows a desktop publishing system to pass page representations to a printer, RenderMan allows three-dimensional modeling systems to pass scene descriptions to a renderer. The design of the RenderMan interface has been the result of a great deal of experience designing and implementing rendering systems.
In 1981 Loren Carpenter wrote the first rendering system at Lucasfilm. He named it REYES, which stood for Renders Everything You Ever Saw. After its use for the Genesis effect in Star Trek II: The Wrath of Khan, Rob Cook and Ed Catmull set out with Loren to redesign it to produce film-resolution pictures of typical naturally occurring scenes. They estimated that a model of a natural scene would require 80,000,000 polygons, a level of complexity far beyond the capabilities, or even the aspirations, of existing systems. This goal forced them to rethink every aspect of the rendering process.
Another fundamental goal of the Lucasfilm group was to avoid digital artifacts in image production. The most troublesome artifacts were those due to spatial aliasing (jaggies), but the group came to realize that temporal aliasing was responsible for strobing during animation. A practical motion blur algorithm was therefore needed. A friendly competition among Ed, Loren, and Rob culminated in Rob's discovery of stochastic sampling, which led to practical solutions for spatial antialiasing, motion blur, depth of field, and a variety of other effects. One of the most challenging aspects of the RenderMan design was the goal of allowing control over these effects, and over image quality in general.
During this period Rob also invented shade trees, which Jim Lawson and I have enhanced to be a complete shading language. The observation that modeling the optical properties of real materials requires the full generality of a programming language is perhaps the most important aspect of RenderMan, and one which distinguishes it from other graphics interfaces, which are usually based on a single large parameterized shading model.
It was painfully obvious that to routinely generate pictures containing 80,000,000 polygons in a reasonable amount of time would require special-purpose hardware. Tom Porter, Adam Levinthal, Mark Leather and Jeff Mock set out to design a large-scale machine, named the REYES machine, to render these types of pictures. The prospect of the REYES machine led to the need for a standard interface between the scenes being produced by a modeling system and accepted by the rendering system. This interface is RenderMan.
Bill Reeves and I designed the first version of RenderMan. Most notable was the fact that the interface was built around curved-surface primitives. It was thought crucial that the modeling program not convert these into polygons, because doing so would lead to geometric artifacts (polygonal silhouettes) and shading artifacts (Mach bands) in the final images. In fact, rendering curved surfaces had always been an active area of research at Lucasfilm and Pixar, beginning with Tom Duff's quadric and Loren Carpenter's bicubic patch rendering algorithms. Of course, the process of choosing the exact set of primitives caused many heated arguments. In the end, we allowed for primitives that rendering systems could be expected to handle directly, without decomposition.
Our initial specification was then circulated internally for an extensive design review. Tony Apodaca, Loren Carpenter, Ed Catmull, Rob Cook, Charlie Gunn, Paul Heckbert, Jim Lawson, Sam Leffler, Mickey Mantle, Eben Ostby, Darwyn Peachey, Tom Porter, Bill Reeves, and Alvy Ray Smith all participated in these sessions at Pixar. During these discussions the style and content of the interface were largely decided.
During this time we also began a joint project with Silicon Graphics to develop a three-dimensional graphics library usable for both interactive graphics and high-quality rendering. Jim Clark contributed his thoughts on a simple graphics library, and Dan Baum, Paul Haeberli, Allen Leinwand, and Rob Myers at SGI contributed to the design. A major goal of this collaboration was to ensure that the interface was compatible with emerging trends in real-time graphics workstation hardware.
This version was then circulated to approximately 20 companies for critical review. Included among these were companies specializing in architectural and mechanical CAD, animation production, and graphics workstations. Tom Porter and I personally met with graphics researchers at many of these companies and collected their thoughts and criticisms. Particularly helpful comments came from Andy van Dam, David Laidlaw, and Jeff MacMann at Stellar; from Peter Schoeler and Gavin Miller at Alias; from Kevin Hunter at Symbolics; from Roy Hall at Wavefront; from Doug Kay, George Joblove, and Lincoln Hu at Industrial Light & Magic; from Michael Schantz, Lewis Knapp, and Eileen McGinnis at Sun; and from Larry Gelberg and Tom Stephenson at TASC.
I was the chief architect and was responsible for incorporating the comments of the above individuals. During the final design period Tony Apodaca was always available to discuss various design alternatives. It was very difficult to pass a bad idea by him, but if I did, I alone am to blame.
Pat Hanrahan
May 1989