To Matt Pharr, Greg Humphreys and Pat Hanrahan for their formalization and reference implementation of the concepts behind physically based rendering, as shared in their book Physically Based Rendering. Physically based rendering has transformed computer graphics lighting by more accurately simulating materials and lights, allowing digital artists to focus on cinematography rather than the intricacies of rendering. First published in 2004, Physically Based Rendering is both a textbook and a complete source-code implementation that has provided a widely adopted practical roadmap for most physically based shading and lighting systems used in film production.
I believe that this is the first time that an Academy Award has been given to a book. And this (ten-year-old) book was written in a way that deserves to be better known and more widely imitated.
The publisher's blurb for Physically Based Rendering: From Theory to Implementation:
From movies to video games, computer-rendered images are pervasive today. Physically Based Rendering introduces the concepts and theory of photorealistic rendering hand in hand with the source code for a sophisticated renderer. By coupling the discussion of rendering algorithms with their implementations, Matt Pharr and Greg Humphreys are able to reveal many of the details and subtleties of these algorithms. But this book goes further; it also describes the design strategies involved with building real systems—there is much more to writing a good renderer than stringing together a set of fast algorithms. For example, techniques for high-quality antialiasing must be considered from the start, as they have implications throughout the system. The rendering system described in this book is itself highly readable, written in a style called literate programming that mixes text describing the system with the code that implements it. Literate programming gives a gentle introduction to working with programs of this size. This lucid pairing of text and code offers the most complete and in-depth book available for understanding, designing, and building physically realistic rendering systems.
Literate programming is an approach to programming, introduced by Donald Knuth, in which a program is presented as an explanation of its logic in a natural language such as English, interspersed with macros and snippets of traditional source code, from which compilable source code can be generated.
Donald Knuth's own explanation, from the 1984 article where he first laid out the idea:
The past ten years have witnessed substantial improvements in programming methodology. This advance, carried out under the banner of “structured programming,” has led to programs that are more reliable and easier to comprehend; yet the results are not entirely satisfactory. My purpose in the present paper is to propose another motto that may be appropriate for the next decade, as we attempt to make further progress in the state of the art. I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature. Hence, my title: “Literate Programming.”
Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.
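The mechanics Knuth describes — prose interleaved with named code chunks, from which a compiler-ready source file is "tangled" out — can be sketched in a few lines of Python. This is a toy illustration, not Knuth's actual WEB system: the `<<name>>= ... @` chunk syntax below loosely imitates noweb and is invented for the example.

```python
import re

# A toy literate source: explanatory prose interleaved with named
# code chunks.  The chunk delimiters imitate noweb-style syntax and
# are a stand-in, not Knuth's real WEB format.
LITERATE_DOC = """
This program greets the user. The main logic is:

<<main>>=
def main():
    <<say hello>>
@

Saying hello is a single print call:

<<say hello>>=
print("Hello, world")
@
"""

def tangle(doc, root="main"):
    """Extract named chunks and expand chunk references recursively,
    producing compilable source code from the literate document."""
    chunks = {}
    for name, body in re.findall(r"<<(.+?)>>=\n(.*?)\n@", doc, re.S):
        chunks[name] = body

    def expand(name, indent=""):
        lines = []
        for line in chunks[name].splitlines():
            m = re.match(r"(\s*)<<(.+?)>>\s*$", line)
            if m:  # a reference to another chunk: expand it in place
                lines.append(expand(m.group(2), indent + m.group(1)))
            else:
                lines.append(indent + line)
        return "\n".join(lines)

    return expand(root)

print(tangle(LITERATE_DOC))
# def main():
#     print("Hello, world")
```

A "weave" step, which Knuth's system also provides, would go the other way: typesetting the prose and code together for human readers.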
Although 1984 was 30 years ago, and Knuth's idea is obviously a good one, its impact has been surprisingly limited. The proportion of the programs written last year that were implemented in a "literate" mode is, to several significant digits, zero.
The best-known and most widely used systems for literate programming today are Sweave and Knitr, both of which support reproducible research in R. The application to reproducible research shifts the goal a bit — instead of "explaining to human beings what we want a computer to do", the aim is to "explain to human beings what we made the computer do in running the analyses or simulations under discussion, so as to allow them to reproduce and extend our work".
In other words, a Knitr document is typically not a program at all — though it can be executed. Rather, it's a scientific or technical paper, which happens also to include the code and data needed to reproduce the claimed results, in a way that makes the relationship to the paper's numbers and tables and graphs completely transparent. The advantages to readers, to authors, and to the field at large are obvious.
As I interpret the publisher's description, Physically Based Rendering is actually a third kind of thing: a tutorial or textbook that is also transparently executable.
It's common for tutorial texts to include code examples — see e.g. the NLTK Book for some excellent examples. These are typically snapshots of interactive sessions, including what the computer prints out as well as what the user types in:
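Such a snapshot might look like the following — an invented example in the spirit of the NLTK Book, rendered here as a runnable script with the session's output shown in comments, and using only the Python standard library rather than NLTK itself:

```python
from collections import Counter

# What the user types in, together with what the computer prints back
# (a hypothetical tutorial session, not taken from the NLTK Book).
sentence = "the quick brown fox jumps over the lazy dog"
tokens = sentence.split()
print(len(tokens))           # → 9

freq = Counter(tokens)
print(freq.most_common(1))   # → [('the', 2)]
```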
And there is often source code available online, of course. Is there a significant advantage to full "literate programming" in such tutorial explanations? Maybe not, which might help explain why the idea hasn't had a greater impact.
But still, every time I pick up a project that I've put aside for a few years (or months, or weeks), I find myself wishing that I'd been using a "literate programming" mode from the beginning, rather than just a set of notes about what I did and how I did it.