evaluateDirectDiffuse should not multiply by 1/PI when not integrated #3892
Comments
@JGamache-autodesk - the factor is wrong regardless, and if the models above use it, they are wrong too. The factor is there to compensate for the integration over a hemisphere that a true raytracer performs. But the VP2 viewport is a point sampler, not a raytracer: it never sums rays over a hemisphere, so the factor shouldn't be there. Your comment about it generating more energy than it receives is simply wrong, and this is also easy to show mathematically.
The factor should be removed, and if the modern PBR models in the VP2 viewport are using it, they are wrong too. Are you able to provide a scene showing that it generates more energy than you put in? I'd like to understand more about why you think that factor is required. Your image doesn't specify which space it is stored in (sRGB, linear, etc.) or what the brightness of the light is (if it's > 1, that would be the explanation, not the shader factor). Thanks.
After some more thought, I feel I should expand, @JGamache-autodesk. The issue we have is that our artists treat VP2 as if it can match a raytracer like Arnold or PRMan, which it clearly cannot. VP2 ambient lights are additive (not dome lights, as our artists think they are), and all other VP2 lights are effectively point lights, which don't exist in most pathtracers (which use area lights). The problem is the difference in look for a case where, for the sake of argument, you paint a texture pure red (1, 0, 0) in value and light it with a directional light of intensity 1.

In a raytracer, you are right: the BSDF would be (1/pi) * cos(theta) * color and the brightness would include that (1/pi) factor, so I was wrong there. Yes, it should remain.

However, to have any chance of a crude equivalence between VP2 and our offline raytracers, I'd like to propose a different approach. Would it instead be possible to have an environment variable that multiplies all the LIGHT INTENSITIES (and not the BSDFs) by pi, in order to achieve a result that more closely matches the artistic expectations of users? I mean, it's much of a muchness, but this feels (slightly) less hacky. That way the BSDF math remains the same, but a point light in VP2 with unit intensity, one unit away, will roughly match the brightness of an area light of area 1 (which is how most artists work, and what they expect to match). Is this a more agreeable suggestion/solution?

After thinking more about the math, you are right: the factor is correct. I was thinking in terms of area-light integrals, not point-light evaluation (which we literally never use in our raytracer), and yes, if we DID have a point light, the BSDF's (1/pi) factor would be present and it would behave as you show. My mistake was that this situation simply never occurs for us, hence all sorts of confusion on the artist side, treating VP2 as if it should produce results equivalent to our raytracer, which is impossible since it's not even close to an apples-to-apples comparison. What do you think? Any other thoughts? Thank you.
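The proposed workaround above can be sketched numerically. This is an illustrative Python sketch (not maya-usd code; the function name is hypothetical): keep the Lambertian BRDF's 1/pi factor intact, but scale the light intensity by pi, so a unit-intensity VP2 point light roughly matches the brightness artists expect from a unit area light.

```python
import math

def shade_diffuse(albedo, light_intensity, cos_theta):
    """Point-light Lambertian shading: (albedo/pi) * I * cos(theta)."""
    return (albedo / math.pi) * light_intensity * cos_theta

albedo, cos_theta = 1.0, 1.0  # pure red texel channel, light straight on

unscaled = shade_diffuse(albedo, 1.0, cos_theta)          # ~0.318, reads dark
scaled = shade_diffuse(albedo, math.pi * 1.0, cos_theta)  # 1.0, matches expectation
print(unscaled, scaled)
```

The BSDF math stays physically correct; only the light intensity is rescaled, which is the point of the suggestion.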
I take it the issue here is getting reasonable consistency between the viewport render and the offline render, given the lighting and material setup? I understand Luke's point about the difference in the calculation. In the viewport there is no Monte Carlo sampling, so one has to effectively do the full integral over the hemisphere (or approximate it) analytically to get the output radiance given the (direct) lighting. I would agree that if, for example, the material is Lambertian diffuse (with albedo rho, i.e. BRDF rho/pi) under uniform incident radiance, the analytic hemispherical integral yields a reflected radiance of rho times the incident radiance, with no remaining factor of 1/pi, since the pi from the cosine-weighted integral cancels the 1/pi in the BRDF. I see, though, that the VP2 usdPreviewSurface code does have the factor of 1/pi:
Precisely. For better or worse this is the artist expectation. And explaining to them that they are not area lights isn't going to stop them from complaining about it.
maya-usd/lib/mayaUsd/render/vp2ShaderFragments/usdPreviewSurfaceLightingAPI1.xml
Lines 124 to 129 in 8391cd9
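The cancellation described in the Lambertian example can be checked numerically. This is an illustrative Python sketch (not the shader code above), assuming the standard Lambertian BRDF f = albedo/pi: integrating f * cos(theta) over the hemisphere returns exactly the albedo, because the pi from the cosine-weighted solid-angle integral cancels the 1/pi in the BRDF.

```python
import math

def hemisphere_integral(albedo, n=2000):
    """Midpoint-rule integral of (albedo/pi) * cos(theta) over the hemisphere."""
    total = 0.0
    dtheta = (math.pi / 2) / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        # d_omega = sin(theta) dtheta dphi; the phi integral contributes 2*pi
        total += (albedo / math.pi) * math.cos(theta) * math.sin(theta) * dtheta
    return total * 2 * math.pi

print(hemisphere_integral(1.0))  # -> ~1.0
```

This is the sense in which the 1/pi "belongs" to an integral: with no integral (a single point evaluation), nothing cancels it.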
I work for a studio where we have an issue that the default Maya surface shaders look wildly different from the UsdPreviewSurface shaders in Maya. The UsdPreviewSurface shaders look a LOT darker.
The reason is an incorrect application of the hemispherical integral of the diffuse in your implementation.
You should not be multiplying by 1/PI for diffuse, since you are not integrating over the hemisphere like you would be if you were sampling in a raytracer.
Since you are point sampling, the 1/PI multiplication factor introduces an incorrect darkening.
The reason is that the cosine-weighted PDF for diffuse sampling is (1/pi) * cos(theta), and a Monte Carlo raytracer uses it in an estimator as follows:
```
f   = diffuseColor * (1/pi) * cos(theta)
pdf = (1/pi) * cos(theta)
E   = f / pdf = (diffuseColor * (1/pi) * cos(theta)) / ((1/pi) * cos(theta)) = diffuseColor
```
See how the (1/pi) factor falls away when used in the estimator form (which a Monte Carlo renderer uses)?
Since your viewport engine only point-samples, and therefore never divides by the pdf, you need to pre-divide by it to get an accurate result.
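The estimator argument above can be sketched in a few lines. This is an illustrative Python sketch (not maya-usd code): with cosine-weighted importance sampling, the per-sample weight f/pdf reduces to plain diffuseColor, so the 1/pi never survives into the rendered result.

```python
import math
import random

def sample_weight(diffuse_color, cos_theta):
    """Per-sample Monte Carlo weight f/pdf for cosine-weighted diffuse sampling."""
    f = diffuse_color * (1 / math.pi) * cos_theta  # BRDF * cos(theta)
    pdf = (1 / math.pi) * cos_theta                # cosine-weighted pdf
    return f / pdf                                 # the (1/pi)*cos(theta) cancels

random.seed(0)
for _ in range(5):
    cos_theta = 0.01 + 0.99 * random.random()  # avoid the degenerate grazing case
    assert math.isclose(sample_weight(0.18, cos_theta), 0.18)
print("per-sample weight is always diffuseColor:", sample_weight(0.18, 0.5))
```

Whether that cancellation justifies dropping the factor in a point-evaluating viewport is exactly what the discussion above debates; the thread later concludes the 1/pi is in fact correct for point lights.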
Could you please change this code to remove the (1/PI) factor in all implementations, so that the result matches a standard raytracer and the previous Maya viewport shaders?
Thanks.