
evaluateDirectDiffuse should not multiply by 1/PI when not integrated #3892

Open
etheory opened this issue Aug 19, 2024 · 5 comments

etheory commented Aug 19, 2024

```glsl
float evaluateDirectDiffuse()
{
    const float PI = 3.141592654;
    return 1.0 / PI;
}
```

I work for a studio where we have an issue that the default Maya surface shaders look wildly different from the UsdPreviewSurface shaders in Maya. The UsdPreviewSurface shaders look a LOT darker.

The reason is an incorrect application of the hemispherical integral of the diffuse in your implementation.

You should not be multiplying by 1/PI for diffuse, since you are not integrating over the hemisphere like you would be if you were sampling in a raytracer.

Since you are point sampling, the 1/PI multiplication factor introduces an incorrect darkening.

The reason is that in a Monte Carlo raytracer, the PDF for cosine-weighted diffuse sampling is (1/pi) * cos(theta), and the estimator uses it as follows:

```
f   = diffuseColor * (1/pi) * cos(theta)
pdf = (1/pi) * cos(theta)

E = f / pdf = (diffuseColor * (1/pi) * cos(theta)) / ((1/pi) * cos(theta)) = diffuseColor
```

See how the (1/pi) factor falls away when used in the form of the estimator (which a Monte Carlo renderer uses)?

Since you are only point-sampling in your viewport engine, and you are therefore not using the pdf, you need to pre-divide by it to get an accurate result.
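To make the cancellation concrete, here is a small sketch (mine, not the viewport code) of the cosine-weighted Monte Carlo estimator described above. Every sample's f/pdf collapses to diffuseColor, which is the sense in which the (1/pi) "falls away":

```python
# Sketch (not the actual VP2 code) of the cosine-weighted Monte Carlo
# estimator described above: the 1/pi in f cancels the 1/pi in the pdf.
import math
import random

def estimate_diffuse(diffuse_color, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Cosine-weighted hemisphere sample: pdf = cos(theta) / pi.
        # 1 - rng.random() is in (0, 1], so cos_theta (and the pdf) is never 0.
        cos_theta = math.sqrt(1.0 - rng.random())
        f = diffuse_color / math.pi * cos_theta   # BRDF * cos(theta)
        pdf = cos_theta / math.pi
        total += f / pdf                          # each sample is exactly diffuse_color
    return total / n_samples
```

Because every sample contributes diffuse_color exactly, the estimate has zero variance: the (1/pi) is only meaningful when divided out by the sampling PDF.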

Could you please change this code to remove the (1/PI) factor in all implementations to make your result match a standard ray-tracer and the previous maya viewport shaders?

Thanks.

@JGamache-autodesk (Collaborator)

If you want to integrate UsdPreviewSurface shaded elements in a scene containing Lambert, Blinn, and Phong Maya shaders, then you should definitely remove that scaling factor so the surface reflects more light than it receives, thus matching these legacy shading models.

If you want to use modern physically based shading and mix UsdPreviewSurface items in a scene containing Standard Surface shaders (or any of the surfaces offered by MaterialX), then the normalization is correct. Since Maya wants to move to modern shading models, this is the UsdPreviewSurface we offer as the default in MayaUSD.

In the following scene, all shaders have their base diffuse color set to (0.45, 0.45, 0.45) and are lit by the default light:
[Image: Shading_models_at_45_percent_grey]

Feel free to submit a pull request with a brighter UsdPreviewSurface that fits your Lambert workflow, but please do not make it the default. I would recommend using an environment variable like MAYAUSD_PREVIEW_SURFACE_MATCHES_LAMBERT to enable the non physical version.

etheory commented Oct 17, 2024

@JGamache-autodesk - the factor is wrong regardless, and if the above models use it, they are wrong also.

The factor is there to compensate for the integration over a hemisphere that a true raytracer does. But the VP2 viewport is a point-sampler, not a raytracer. It never sums rays over a hemisphere, so the factor shouldn't be there.

So your comment about generating more energy than it receives is simply wrong.

It is also easy to show mathematically:

  • cos(theta) is always ≤ 1 and the colour is always ≤ 1, hence cos(theta) * diffuseColor ≤ 1, so there is no way to get more energy out than you put in. It's literally impossible.

The factor should be removed. And if the modern PBR models are using it in the VP2 viewport, they are wrong.

Are you able to provide a scene to show that it generates more energy than you put in? I'd like to understand more why you think that factor is required.

Your image doesn't specify which space it is stored in (sRGB, linear etc.) and what the brightness of the light is (if it's >1 that would be the explanation, not the shader factor).

Thanks.

etheory commented Oct 17, 2024

After some more thought, I feel I should expand on my reply, @JGamache-autodesk:

The issue we have is that our artists are treating VP2 as if it can match a raytracer like Arnold or PRMan, which it clearly cannot.

Since VP2 ambient lights are additive (not dome lights, as our artists think they are) and all other VP2 lights are effectively point lights, which don't exist in most pathtracers (which use area lights), the issue is the difference in look for cases where, for the sake of argument, you paint a red texture with value (1, 0, 0) and light it with a directional light of intensity 1.

In a raytracer, you are right, the BSDF would be (1/pi) * cos(theta) * color and the brightness would include that (1/pi) factor, so I am wrong there. Yes, it should remain.

However, to have any chance of getting some crude equivalent between VP2 and our offline raytracers I'd like to propose a different approach.

Would it instead be possible to have an environment variable that multiplies all the LIGHT INTENSITIES (not the BSDFs) by pi, in order to achieve a result that more closely matches the artistic expectations of users? I mean, it's much of a muchness, but this feels (slightly) less hacky.

That way the BSDF math remains the same, but a point light in VP2 with unit intensity, one unit away, will roughly match the brightness of an area light with an area of 1 unit (which is how most artists work, and where they expect results to match).
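A minimal sketch of that proposal, under my own assumption (not confirmed against the VP2 source) that viewport shading is roughly BSDF * lightIntensity * cos(theta); the function and flag names are illustrative, not VP2 API:

```python
# Hypothetical sketch of the "scale light intensities by pi" proposal.
# shade() and scale_lights_by_pi are illustrative names, not VP2 API.
import math

def shade(diffuse_color, light_intensity, cos_theta, scale_lights_by_pi=False):
    if scale_lights_by_pi:
        light_intensity *= math.pi          # proposed env-var behaviour
    brdf = diffuse_color / math.pi          # the BSDF keeps its physical 1/pi
    return brdf * light_intensity * max(cos_theta, 0.0)
```

With the scaling on, a unit-intensity light at normal incidence returns diffuse_color exactly, while the BSDF code itself is untouched.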

Is this a more agreeable suggestion/solution?

After thinking more about the math, you are right, the factor is correct. I was thinking in terms of area-light integrals, not point-light integrals (which we literally never use in our raytracer). Yes, if we DID have a point light, the BSDF (1/pi) factor would be present and it would behave as you show. My mistake was that we simply never HAVE this situation occur, hence all sorts of confusion on the artist side, treating VP2 as if it should produce results equivalent to our raytracer, which is impossible, since it's not even close to an apples-to-apples comparison.

What do you think? Any other thoughts? Thank you.

portsmouth commented Oct 18, 2024

I take it the issue here is getting reasonable consistency between the viewport render and the offline render, given the lighting and material setup?

I understand Luke's point about the difference in the calculation. In the viewport there is no Monte Carlo sampling, so one has to effectively do the full integral over the hemisphere (or approximate it) analytically to get the output radiance given the (direct) lighting. I would agree that if, for example, the material is Lambertian diffuse (with albedo diffuseColor) and the lighting is a uniform dome (radiance Li), then the integrated result should be like

Lo = diffuseColor * Li

with no factor of $\pi$. While if the incident light came from a singular point light (i.e. an extremely small, powerful emitter), instead the integral would effectively have a delta-function, so the result would be something like

Lo = diffuseColor/pi * cos(theta_l) * P_l / (4.0 * pi * r_l*r_l);

where P_l is the point light power, r_l is the distance to the light, etc.
Similarly for a directional light, without the inverse-square-law fall-off. The case of a singular light could be tricky to get consistent with offline rendering, as it will depend somewhat on the normalization adopted for singular lights (e.g. Arnold may or may not account for the physically correct inverse-square fall-off, depending on the light options, even for area lights).
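The two cases above can be sketched numerically (my transcription of the formulas, not renderer code; P_l and r_l follow the naming in the comment):

```python
# Sketch of the two direct-lighting cases: a uniform dome integrates to
# albedo * Li (the pi from the cosine integral cancels the 1/pi in the BRDF),
# while a singular point light keeps the 1/pi. P_l = light power, r_l = distance.
import math

def dome_lighting(diffuse_color, Li, n_theta=2048):
    # Midpoint-rule integral of (albedo/pi) * Li * cos(theta) over the
    # hemisphere; the phi integral contributes a factor of 2*pi exactly.
    d_theta = (math.pi / 2.0) / n_theta
    cos_integral = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        cos_integral += math.cos(theta) * math.sin(theta) * d_theta
    return (diffuse_color / math.pi) * Li * 2.0 * math.pi * cos_integral

def point_lighting(diffuse_color, P_l, r_l, cos_theta_l):
    irradiance = P_l / (4.0 * math.pi * r_l * r_l)   # inverse-square fall-off
    return (diffuse_color / math.pi) * cos_theta_l * irradiance
```

Here dome_lighting(0.45, 1.0) comes out at approximately 0.45, confirming the dome case has no leftover 1/pi, whereas the point-light result retains it.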

I see though that the VP2 usdPreviewSurface code does have the factor of $\pi$ (the evaluateDirectDiffuse factor). That at least I don't understand (in agreement with Luke), according to the discussion above. Though possibly I'm not understanding the calculation being done in the VP2 code.

etheory commented Oct 23, 2024

> I take it the issue here is getting reasonable consistency between the viewport render and the offline render, given the lighting and material setup?

Precisely. For better or worse this is the artist expectation. And explaining to them that they are not area lights isn't going to stop them from complaining about it.
