Hacker News | mattpharr's comments

Minimizing the ray payload for GPU was definitely part of why we didn't add that. (Though it does pain me sometimes that we don't have it in there.)

And, PBR being a textbook, we do save some things for exercises and I believe that is one of them; I think it's a nice project.

A final reason is book length: we generally don't add features that aren't described in the book, and we're about at the page limit. So to add this, we'd have to cut something else...


How would one link to a physical object?

If this is what you’re asking: there are (perhaps too discreet) links at the bottom of each page to Amazon and MIT Press to purchase the physical book.


I think the geo URI scheme might work if you have an exact location for the book.


In the 4th edition, there's no support for RGB rendering--it's always and only spectral.

And admittedly the spectral rendering option in earlier editions wasn't great. We didn't always correctly distinguish between illuminants and reflectances, used a fixed binning of wavelength ranges (vs stochastically sampling wavelengths), and had a fine-but-not-state-of-the-art RGB -> Spectrum conversion algorithm. All of that is much better / state of the art in the 4th edition.
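To illustrate the "stochastically sampling wavelengths" point, here is a minimal sketch of estimating a spectral quantity by Monte Carlo sampling of wavelengths over the visible range instead of accumulating into fixed bins. The illuminant and reflectance curves here are made up for illustration and have nothing to do with pbrt's actual spectra:

```python
import random

# Hypothetical spectral curves, for illustration only.
def illuminant(lmbda):
    return 1.0  # flat illuminant

def reflectance(lmbda):
    # A simple smooth reflectance peaking in the green.
    return 0.5 + 0.5 * max(0.0, 1.0 - abs(lmbda - 550.0) / 150.0)

def estimate_reflected(n_samples, lo=360.0, hi=830.0):
    """Monte Carlo estimate of the integral of illuminant * reflectance
    over the visible range, sampling wavelengths stochastically rather
    than using a fixed binning of wavelength ranges."""
    total = 0.0
    for _ in range(n_samples):
        lmbda = random.uniform(lo, hi)   # sample a wavelength
        pdf = 1.0 / (hi - lo)            # uniform sampling density
        total += illuminant(lmbda) * reflectance(lmbda) / pdf
    return total / n_samples
```

The win is that per-path you only carry a handful of sampled wavelengths rather than a full binned spectrum, and the estimator converges to the true integral as sample counts grow.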


Nice work! This was a (small) wart in previous editions.

It's a great text, and remains one of my go-to examples when someone asks for an example of a well-executed technical book.

I was only ever peripherally involved in this area, but found myself revisiting the original just because it was put together so cleanly.


Oh, thanks a lot for the clarification! And thanks for the work on the book in general. :)

I need to look into the changes then. The spectrum implementation in the 3rd edition served as inspiration for an X-ray raytracer I've been working on.


I've had a quick look in spectrum.cpp, but can't obviously find it: what uplifting method are you using now?

Is it something similar to Wenzel and Jo's one, or Alex Wilkie's Sigmoid variation?

Edit: ah: RGBSigmoidPolynomial...
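For anyone else digging through spectrum.cpp: the sigmoid-polynomial representation evaluates a quadratic in wavelength and squashes it through a sigmoid so the result is always a valid reflectance in (0, 1). A sketch of that form in Python; the coefficients in pbrt come from a precomputed RGB-to-spectrum fit, and the constants in the usage below are made up for illustration:

```python
import math

def sigmoid(x):
    # Maps (-inf, inf) -> (0, 1), with s(0) = 0.5.
    return 0.5 + x / (2.0 * math.sqrt(1.0 + x * x))

def sigmoid_polynomial(c0, c1, c2, lmbda):
    """Evaluate a sigmoid-polynomial spectrum: a quadratic in the
    wavelength lmbda, squashed through the sigmoid so the returned
    reflectance always lies in (0, 1)."""
    x = c0 * lmbda * lmbda + c1 * lmbda + c2
    return sigmoid(x)
```

The appeal of this representation is that three coefficients give a smooth spectrum that can never go negative or exceed one, so the fitting problem reduces to finding (c0, c1, c2) per RGB triple.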


Does any open source package like Blender support spectral rendering? Want to render dichroic glass :)


For Cycles, it’s in the works but it will take a while.

You can use LuxCoreRender with Blender, though!


I love the Watt and Watt "Advanced Animation and Rendering Techniques" book as well; it was IMHO the best book on those topics in its time.


I'm quite emotional about you writing this :-) Thanks for your work, it's really high quality in the same spirit!


The physical book was published 6 months ago, and per agreement with the publisher, the contents are now freely available online as well.

The book's blurb:

Photorealistic computer graphics are ubiquitous in today's world, widely used in movies and video games as well as product design and architecture. Physically based approaches to rendering, where an accurate modeling of the physics of light scattering is at the heart of image synthesis, offer both visual realism and predictability. Now in a comprehensively updated new edition, this best-selling computer graphics textbook sets the standard for physically based rendering in the industry and the field.

Physically Based Rendering describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. A method known as literate programming combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The book's leading-edge algorithms, software, and ideas—including new material on GPU ray tracing—equip the reader to design and employ a full-featured rendering system capable of creating stunning imagery. This essential text represents the future of real-time graphics.

The author team of Matt Pharr, Greg Humphreys, and Pat Hanrahan garnered a 2014 Academy Award for Scientific and Technical Achievement from the Academy of Motion Picture Arts and Sciences based on the impact the first and second editions of the book had on how movies are made. The Academy called the book a “widely adopted practical roadmap for most physically based shading and lighting systems used in film production.”

Of the book, Donald Knuth wrote “This book has deservedly won an Academy Award. I believe it should also be nominated for a Pulitzer Prize.”


Also related, in terms of the programming model: C* on Connection Machines and the C dialect used on MasPar systems in the 1980s.

There is some discussion in the ispc paper: https://pharr.org/matt/assets/ispc.pdf


> I would expect a shader could do a credible job of emulating all of these.

The key is that it is necessary to know the coherence of the light arriving at the surface in order for a shader to accurately model reflection. (See Figure 13 in the paper for an example that shows how this matters.) Otherwise a shader would have to make up a guess about the light's coherence.

The main contribution of the paper is a much more efficient approach than was known before for finding light's coherence, even in the presence of complex light transport (multiple reflections, etc), and one that further allows the application of traditional ray-optics computer graphics techniques for sampling light paths through the scene. (For example, previously if one wanted to track coherence, it was necessary to sample light paths starting from the light sources rather than from the camera, as is done in path tracing.)


The irony of such a post on a website with the domain name “y combinator”…


And on a platform written in a Lisp dialect created specifically for hosting it, a language called Arc. The roots of Lisp run deep at HN.


Rejection sampling is not a good choice here.

First, the cost is (2 FMAs + the cost of generating 2 random numbers) times the number of rejection-sampling iterations. On average, you reject (4-pi)/4 ~= 0.21 of the time. However, if you're running on a GPU in a warp (or the equivalent) of 32 threads, then you pay the cost of the maximum number of rejections over all the threads.

The bigger issue is that with a direct mapping from [0,1]^2 to the disk, as is described in this tweet, well-distributed uniform samples in [0,1]^2 map to well-distributed samples on the disk. Thus, stratified or low-discrepancy samples in [0,1]^2 end up being well distributed on the disk. In turn, this generally reduces error when using Monte Carlo integration of functions over the disk.
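One such low-distortion direct mapping is Shirley and Chiu's concentric mapping (the one pbrt uses); a sketch in Python:

```python
import math

def concentric_sample_disk(u1, u2):
    """Area-preserving map from [0,1]^2 to the unit disk
    (Shirley-Chiu concentric mapping). Stratified or low-discrepancy
    samples in the square stay well distributed on the disk."""
    # Map the sample to [-1, 1]^2.
    ox = 2.0 * u1 - 1.0
    oy = 2.0 * u2 - 1.0
    # Handle the degenerate center point.
    if ox == 0.0 and oy == 0.0:
        return (0.0, 0.0)
    # Pick the "wedge" of the square by the larger coordinate; concentric
    # squares in [-1,1]^2 map to concentric circles on the disk.
    if abs(ox) > abs(oy):
        r = ox
        theta = (math.pi / 4.0) * (oy / ox)
    else:
        r = oy
        theta = (math.pi / 2.0) - (math.pi / 4.0) * (ox / oy)
    return (r * math.cos(theta), r * math.sin(theta))
```

Unlike the naive polar mapping (r = sqrt(u1), theta = 2*pi*u2), this keeps nearby square samples nearby on the disk, which is why it preserves stratification so well.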


You can have the threads cooperate, distributing from those that got extra to those that were unlucky.


FWIW that algorithm for uniform floating-point sampling in [0,1] is actually originally due to: Walker, Alastair J. “Fast generation of uniformly distributed pseudorandom numbers with floating-point representation.” Electronics Letters 10 (1974): 533-534.
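As I understand the idea (a sketch, not Walker's exact formulation): pick the binade [2^(e), 2^(e+1)) with a geometric distribution, halving with probability 1/2 each step, then pick a mantissa uniformly within it. Unlike dividing a fixed-width integer by 2^32, this can produce (nearly) every representable float in [0,1):

```python
import random

def uniform_float01(rng=random):
    """Sample a uniformly distributed float in (0, 1) by choosing the
    binade geometrically and then a uniform mantissa within it."""
    # Each coin flip halves the interval: P(binade [0.5,1)) = 1/2,
    # P([0.25,0.5)) = 1/4, and so on, matching a uniform distribution.
    exponent = -1
    while rng.random() < 0.5 and exponent > -126:
        exponent -= 1
    # 23 random mantissa bits select a value within the chosen binade.
    mantissa = rng.getrandbits(23)
    return (1.0 + mantissa / (1 << 23)) * 2.0 ** exponent
```

The expected number of coin flips is constant (about 2), so the geometric loop is cheap in practice even though it is unbounded in principle.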

