Computer Graphics II - HDR & Physically Based Rendering & Image Based Lighting & Shadows

 

Final Project Of The Term: HDR, PBR, IBL, Shadow Mapping (Bunch of Terms)

Hello again! This time we are concluding the spring term of Computer Graphics II here at METU, 2022-2023. We'll be playing with high dynamic range (HDR) imaging, physically based rendering (PBR), image based lighting (IBL) and shadow mapping techniques. It will be a good way to sum up the term! Let's begin, shall we?


Let's go step by step, and what we are dealing with will make much more sense. We have many things to tackle at once, but there's no need to overload ourselves. Our first step will be HDR:


HDR


Here you can see an example of lighting based on high dynamic range imaging. In contrast to LDR, HDR allows lights and bright spots in the image to be far brighter than the usual maximum. Thus, even if an object is less shiny than a perfect metal, HDR light sources will remain much more persistent in its reflections and create more realistic lighting.


In order to work with HDR, you first need an image that supports it, but such images are really common, so there's no need to worry there.

 

To work with them, I first created a new FBO whose color attachment uses floating point values instead of integers. This way, pixel intensity values are no longer limited to the usual 1.0; they can go up into the millions. Then I rendered everything to a texture instead of the screen, so I could do some post-processing with it.

 
To get this image, I rendered a full-screen rectangle. I forgot to normalize the UV coordinates to [0,1] and to attach a depth buffer to the FBO, so here's what you get if you forget such things like I did.

After applying a tonemapping function to this image, which maps HDR down to LDR once everything else is done so that our screens can display it, the image becomes much brighter and more pleasant to look at; a small sketch of such a tonemapping step is below. You can see how the render itself improved in the next section.
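For reference, here is a minimal sketch of a tonemapping pass in a postprocessing fragment shader, using the simple Reinhard operator plus gamma correction. The operator and the uniform names are illustrative, not necessarily the exact ones used in this project:

// Minimal postprocessing sketch (GLSL): Reinhard tonemapping + gamma correction.
// The HDR color comes from the floating point texture we rendered into.
#version 330 core
in  vec2 uv;
out vec4 fragColor;
uniform sampler2D hdrColorBuffer;   // floating point color attachment

void main()
{
    vec3 hdr = texture(hdrColorBuffer, uv).rgb;
    vec3 ldr = hdr / (hdr + vec3(1.0));   // Reinhard: maps [0, inf) into [0, 1)
    ldr = pow(ldr, vec3(1.0 / 2.2));      // gamma correction
    fragColor = vec4(ldr, 1.0);
}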

Importing OBJ

While testing the HDR, I spawned 25 little spheres and added lights on top of them. In order to work with spheres, though, I first had to fix the OBJ importing function, because it supported neither Blender-exported normals nor texture coordinates. It was not trivial, because OBJ lets a face pair the same vertex position with different texture coordinates, while OpenGL expects a single set of attributes per vertex index. I fixed it by duplicating the affected vertices and adjusting the face indices whenever more than one texture coordinate was attached to the same vertex. Here you can see the before and the after:


PBR - With Point Lights

Naturally, after HDR is done, we want to render something that utilizes this new capability. Before implementing IBL, though, we can try to grasp the fundamentals of physically based rendering by making certain materials, especially those with varying metallic and roughness values. I got a great amount of help from the PBR tutorials at https://learnopengl.com/PBR/Theory , and I used code from the pages under the PBR category when it was directly given there. Other than that, in order not to spoil ourselves, I only allowed myself to build on my previous homework code (HW2, to be exact) without directly taking code from anywhere else.

You can learn the theory better from the source above, but PBR boils down to some easy-to-grasp concepts (a small shader sketch follows this list):
  • It simulates reality by using concepts such as conservation of energy. For example, if the diffuse coefficient is kD, then the specular coefficient is kS = 1 - kD, so that their sum is exactly 1.
  • An area of an object can be lit from all directions reachable from that point (diffuse).
  • The metallic property of an object directly changes the specular and diffuse coefficients. If an object is fully metallic, it only reflects light, so only the specular term is present. However, for practical purposes, a material can be somewhere in between fully metallic and dielectric (non-metal).
  • The roughness property reduces the shininess of an object by accounting for the fact that the reflection of a light ray gets more randomized on a rough surface.
  • If an object is rough, the specular part is spread out, but the total light contribution stays the same.
 ...and so on.
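To make these ideas concrete, here is a minimal sketch of the contribution of a single point light with the Cook-Torrance model, in GLSL, following the learnopengl.com formulation referenced above. Function and variable names are illustrative rather than the exact ones in my code:

// Schlick's Fresnel approximation: reflectivity grows at grazing angles.
vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}

// GGX normal distribution: how many microfacets align with the half vector.
float distributionGGX(vec3 N, vec3 H, float roughness)
{
    float a2    = roughness * roughness * roughness * roughness;
    float NdotH = max(dot(N, H), 0.0);
    float denom = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265 * denom * denom);
}

// Smith geometry term: self-shadowing of microfacets for view and light dirs.
float geometrySchlickGGX(float NdotX, float roughness)
{
    float r = roughness + 1.0;
    float k = (r * r) / 8.0;   // remapping for direct lighting
    return NdotX / (NdotX * (1.0 - k) + k);
}

float geometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    return geometrySchlickGGX(max(dot(N, V), 0.0), roughness)
         * geometrySchlickGGX(max(dot(N, L), 0.0), roughness);
}

// Contribution of one point light at lightPos with color/intensity lightColor.
vec3 pointLightContribution(vec3 worldPos, vec3 N, vec3 V,
                            vec3 albedo, float metallic, float roughness,
                            vec3 lightPos, vec3 lightColor)
{
    vec3  L        = normalize(lightPos - worldPos);
    vec3  H        = normalize(V + L);
    float dist     = length(lightPos - worldPos);
    vec3  radiance = lightColor / (dist * dist);      // inverse-square falloff

    vec3  F0  = mix(vec3(0.04), albedo, metallic);    // base reflectivity
    float NDF = distributionGGX(N, H, roughness);
    float G   = geometrySmith(N, V, L, roughness);
    vec3  F   = fresnelSchlick(max(dot(H, V), 0.0), F0);

    vec3 kS = F;                                      // specular ratio
    vec3 kD = (vec3(1.0) - kS) * (1.0 - metallic);    // energy conservation

    float NdotL    = max(dot(N, L), 0.0);
    vec3  specular = (NDF * G * F) /
                     max(4.0 * max(dot(N, V), 0.0) * NdotL, 0.0001);

    return (kD * albedo / 3.14159265 + specular) * radiance * NdotL;
}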
 
In order to play with this, I implemented the PBR code for point lights first and imported a rusted metal ball material, which looks like this:

And here are plain red objects with varying metallic and roughness properties:

You could say that a good part of my time went into understanding how PBR works, and point lights were a great accompaniment along the way. But our main goal is working with IBL, so in the next chapters we will try to apply these PBR concepts to our environment map.


-- Something went horribly wrong at some point - I call this one "haunted"

PBR IBL - Diffuse Irradiance


In order to simulate lighting more correctly, we can get help from the environment textures. The environment captures a lot of the lighting information by itself, so these two chapters are dedicated to working with image based lighting together with PBR.

The diffuse part of IBL, visualized as a special kind of blurred image, is shown above. This texture captures the total light contribution for a given surface normal: for each point on a surface, it gathers the light arriving over the whole 180-degree hemisphere around that normal. Thus, this texture should be generated not by generic blurring, but by sampling the environment over the hemisphere around every normal direction and writing the averaged color into the corresponding pixel, so that it can later be looked up by the normal vector when rendering the object. Note that if our image is an equirectangular projection, blurring will do more or less the same thing, but it won't be as accurate.

In order to make sampling faster, I rendered the skybox into a cubemap; until then I had been rendering the equirectangular projection directly from a single image. I couldn't figure out how to sample in that space, so I first converted the HDR map into a cubemap and then sampled it with 3D direction vectors. I then used the formulas from the PBR article mentioned above to be able to utilize our new diffuse irradiance map.
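As a reference, here is a minimal sketch of the irradiance convolution pass in GLSL, in the spirit of the diffuse-irradiance chapter of the article above. For each output texel we walk the hemisphere around its normal and average the incoming light; names and step sizes are illustrative:

// Convolve the environment cubemap over the hemisphere around normal N.
const float PI = 3.14159265;

vec3 convolveIrradiance(samplerCube environmentMap, vec3 N)
{
    // Build a tangent basis around the normal so we can walk the hemisphere.
    vec3 up    = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 right = normalize(cross(up, N));
    up         = cross(N, right);

    vec3  irradiance  = vec3(0.0);
    float sampleDelta = 0.025;   // smaller step = more samples = fewer bright dots
    float nrSamples   = 0.0;

    for (float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
    {
        for (float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
        {
            // Spherical to cartesian in tangent space, then into world space.
            vec3 tangentSample = vec3(sin(theta) * cos(phi),
                                      sin(theta) * sin(phi),
                                      cos(theta));
            vec3 sampleVec = tangentSample.x * right
                           + tangentSample.y * up
                           + tangentSample.z * N;

            // cos(theta) weights by incidence angle, sin(theta) by solid angle.
            irradiance += texture(environmentMap, sampleVec).rgb
                        * cos(theta) * sin(theta);
            nrSamples  += 1.0;
        }
    }
    return PI * irradiance / nrSamples;
}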

Below you can see two images that show how much better our spheres look just by adding a diffuse irradiance map to them:

 

But, as with most things involving convolutions and sampling, sample count matters. For environments with very bright lights in narrow regions, sampling less than the required amount can produce images with weird bright dots like this:

 
 
Luckily, I could get around this problem not by increasing the sample count, which is not a good way to deal with it, but by sampling the environment map through its mipmaps whenever needed, which OpenGL conveniently handles automatically.

I also had a problem where one side of the spheres was just plain yellow/white, which I solved by clamping the brightness to 10000. It was surprising that the scene contained lights brighter than that value.

PBR IBL - Specular: Prefilter Map

We will do the same thing for the specular side of things, but with one big difference: we no longer sample over the whole hemisphere, but from a narrow lobe around the direction reflected off the surface. To simulate various roughness values, we need to sample from randomized directions around that reflection vector, which is visualised as such:


In order to accomplish this, we will be using multiple mipmap levels, each rendered manually with increasing roughness. As roughness increases there is less need for high resolution, and this way we can use OpenGL's trilinear interpolation when the roughness falls between two mipmap levels. I'm using a distribution function taken from the specular chapter of the article above; a rough sketch of the prefiltering pass is below, followed by captures for roughness=0, roughness=0.2 and roughness=0.8 respectively:
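This sketch follows the specular-IBL chapter referenced above: for each output texel we assume N = V = R and importance-sample the GGX lobe for the current roughness. Helper names and the sample count are illustrative:

// Low-discrepancy 2D sample (Hammersley sequence, radical inverse base 2).
const float PI = 3.14159265;

vec2 hammersley(uint i, uint N)
{
    uint bits = i;
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return vec2(float(i) / float(N), float(bits) * 2.3283064365386963e-10);
}

// Generate a half-vector around N following the GGX distribution.
vec3 importanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
    float a        = roughness * roughness;
    float phi      = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a * a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);

    vec3 H = vec3(cos(phi) * sinTheta, sin(phi) * sinTheta, cosTheta);

    // Tangent space to world space.
    vec3 up      = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(up, N));
    vec3 bitan   = cross(N, tangent);
    return normalize(tangent * H.x + bitan * H.y + N * H.z);
}

// Prefilter the environment map for one roughness (one mipmap level).
vec3 prefilterEnvMap(samplerCube environmentMap, vec3 R, float roughness)
{
    vec3 N = R, V = R;                     // simplifying assumption
    const uint SAMPLE_COUNT = 1024u;
    vec3  prefiltered = vec3(0.0);
    float totalWeight = 0.0;

    for (uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = hammersley(i, SAMPLE_COUNT);
        vec3 H  = importanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V);
        float NdotL = max(dot(N, L), 0.0);
        if (NdotL > 0.0)
        {
            prefiltered += texture(environmentMap, L).rgb * NdotL;
            totalWeight += NdotL;
        }
    }
    return prefiltered / totalWeight;
}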


 

As you can see, really bright lights cause a problem if you don't deal with them. These images were rendered without using the environment mipmaps, and in the following image you can see how drastic a difference they make, fixing nearly all of our issues:

(roughness=0.8)

However, I don't know whether it is the mipmaps or the UV texture coordinates, but I suspect that because the sphere in this scene has only one vertex at its north pole, with many texture coordinates attached to that same vertex, a rendering defect appears there, barely noticeable but present nonetheless.

And this is how it looks when you don't render any mipmap levels at all, causing edges to turn black when you let OpenGL decide which mipmap level to sample:


But in the end it all turns out great, and here you can see the results really shine:





IBL Results






 

Hemisphere Projection

Because our environment map was effectively rendered at an infinite distance, it didn't behave correctly when we moved the camera around: closer objects should appear to move faster, which is known as the parallax effect. There's no need to worry though; for the purposes of this homework we don't need another kind of mathematical projection, since we can simulate it with a dome -- aka a hemisphere.

I couldn't find a hemisphere online though, so I downloaded Blender and made it myself. Here are roughly the steps I took:

  • Add a UV Sphere and attach a material with an equirectangular projected image as its base texture.

  • Invert each face's normal vector and enable backface culling so that we see the hemisphere's far side from the inside.

 

  • Flatten the bottom half by scaling its vertices' z coordinates to 0, then move the whole thing upwards until the flattened base also sits at z = 0.

After these steps we had a perfect hemisphere, but it turns out that generating it this way leads to incorrect projection results, especially near the edges of the hemisphere:

 

You can never get a perfect projection, because the camera that took the image was held at a single point, which means the projection only looks correct from exactly that point. But we can still make it look good just by changing the geometry a bit, smoothing the edges and scaling the center by roughly a logarithmic factor. I did this by hand, since that was good enough: I followed the ground and made sure that, at the center, the ground lines stayed straight. You can see in the Shanghai map in the project executable that the floors look reasonably straight when viewed from the starting point.

Here is the final result of the projection/hemisphere rendering, as can be seen from the village map:

 

Shadows

Here comes the last step of our project: shadows! Shadows are notoriously cumbersome to deal with, or let's say to get perfect, but we will do our best to implement good looking shadows without resorting to an advanced algorithm. We will implement the most straightforward of them all - shadow mapping.

It will be easier to explain this on the way to the final render, but the main thing you need to know is that shadow mapping works by rendering the scene from the point of view of the light source(s). With IBL it is possible to capture many lights by reducing the environment image to a number of directional lights, but in this project we will implement a single light source. Our goal from the start was a program that is performant and pleasant to look at, similar to the Unreal Engine HDRi environment demos, and those use only one shadow caster. So the IBL-based alternative won't be implemented, partly for time reasons, but also because we can adjust the specular coefficient by first computing the shadows and then re-rendering the scene with the PBR shaders, this time reducing the specular of objects wherever they are in shadow. Note that reducing speculars this way isn't fully correct, but it leads to good enough results, so that is how it was implemented in this homework. The algorithm, however, is designed to support multiple light sources, simply by combining their results in the shadow map (not the depth map). We'll come to that later.

So, in order to work with shadow maps, we need two depth buffers: one from the camera and one from the light source. Reading the camera's depth buffer is relatively easy, since we are already rendering to an intermediate texture for postprocessing. A short note on visualizing it follows, and then an example of the depth buffer we get from the camera:
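Since a perspective depth buffer stores depth non-linearly, it helps to linearize it before displaying it. Here is a minimal sketch of one way to do that, using the camera's near and far planes; the names are illustrative, and this is just one common way to visualize depth, not necessarily exactly what this project does:

// Convert a stored non-linear depth value back to a linear view-space distance.
float linearizeDepth(float depth, float nearPlane, float farPlane)
{
    float z = depth * 2.0 - 1.0;   // back to NDC [-1, 1]
    return (2.0 * nearPlane * farPlane) /
           (farPlane + nearPlane - z * (farPlane - nearPlane));
}

// In the fragment shader, a grayscale output divided by far to fit into [0,1]:
// vec3 color = vec3(linearizeDepth(texture(cameraDepthMap, uv).r, near, far) / far);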

 

Getting the light's depth buffer is also not that hard; we just need to generate an orthographic projection for the light and render the scene from its point of view, looking at the center of the world:

 

The way we chose to utilize these buffers is a bit like deferred rendering, in that the shadow computation can be done from the information in these textures alone. So if we have multiple lights, we only need to render from the perspective of each new light, not from the camera again. This also lets us add the shadows as a late step, just before tonemapping in the postprocessing shader, simply by combining the color buffer of the final render with the shadow texture.

Rendering the actual shadow texture, however, requires some slightly tricky thinking about screen space. Because we only have the depth buffers, we only have access to screen space coordinates. That is fine, though, since we also have the projection and view matrices of both the light and the camera. This means we can multiply the screen space position by the inverse of the projection and view matrices and get the world coordinate back. A sketch of this reconstruction is below, followed by a visualization of the world coordinates of the scene obtained this way:
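A minimal sketch of that reconstruction in GLSL, assuming the inverse of the camera's projection * view matrix is passed in as a uniform (names are illustrative):

// Reconstruct the world position of a pixel from the camera depth buffer.
// 'uv' is the screen-space coordinate in [0,1].
vec3 reconstructWorldPos(sampler2D cameraDepthMap, vec2 uv, mat4 invViewProj)
{
    float depth = texture(cameraDepthMap, uv).r;

    // Screen space [0,1] -> normalized device coordinates [-1,1].
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

    // Undo projection and view, then the perspective divide.
    vec4 worldPos = invViewProj * ndc;
    return worldPos.xyz / worldPos.w;
}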

 

We can further utilize these coordinates by multiplying them with the light's projection and view matrices, to get the same point in the light's screen space.

 

These are useful because we can now sample the light's depth buffer at those coordinates and get the depth of the closest surface the light sees in that direction. Since we also know the depth of our point of interest in the light's space, we can simply compare the two and determine whether something sits between the light and our point. Basically, the light's depth map tells us how far the light reaches, and using that information we can finally draw the shadows:

 

Oops, maybe not quite, because the previous picture is really hard to look at. It's a simple fix though: the artifact is caused by small precision errors in the depth map and is called shadow acne. I won't go into details, but it can be fixed really easily by adding a bias to the depth comparison, as sketched below, getting us this result:
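Here is a minimal sketch of the shadow test with such a bias, in GLSL. The world position comes from the reconstruction above, 'lightViewProj' is the light's projection * view matrix, and the bias value and names are illustrative:

// Returns 1.0 if the point is lit and 0.0 if it is in shadow.
float shadowFactor(sampler2D lightDepthMap, vec3 worldPos, mat4 lightViewProj)
{
    // Project the world position into the light's clip space, then to [0,1].
    vec4 lightClip = lightViewProj * vec4(worldPos, 1.0);
    vec3 lightNdc  = lightClip.xyz / lightClip.w;
    vec3 lightUvz  = lightNdc * 0.5 + 0.5;

    // Outside of the light's frustum -> treat as fully lit.
    if (lightUvz.z > 1.0)
        return 1.0;

    float closestDepth = texture(lightDepthMap, lightUvz.xy).r;
    float currentDepth = lightUvz.z;

    // Without the bias, surfaces shadow themselves due to depth precision.
    float bias = 0.005;
    return currentDepth - bias > closestDepth ? 0.0 : 1.0;
}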

 

Voila!

It is not quite perfect yet, and it never will be, but there are things left to do. For example, to make shadow edges look better, we can increase the resolution of the light's depth map, which is easy enough to do. We can also sample the neighbouring texels when determining the shadows, which is called percentage-closer filtering (PCF). I again won't go into details since the information is widely available; a small sketch of PCF is below, followed by the results, before and after:
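A small sketch of PCF in GLSL, averaging the shadow test over a 3x3 neighbourhood of the light's depth map. 'lightUvz' is the point already projected into the light's [0,1] space as above; the kernel size, bias and names are illustrative:

// Percentage-closer filtering: fraction of neighbouring samples that are lit.
float shadowPCF(sampler2D lightDepthMap, vec3 lightUvz, float bias)
{
    vec2  texelSize = 1.0 / vec2(textureSize(lightDepthMap, 0));
    float lit = 0.0;
    for (int x = -1; x <= 1; ++x)
    {
        for (int y = -1; y <= 1; ++y)
        {
            float closest = texture(lightDepthMap,
                                    lightUvz.xy + vec2(x, y) * texelSize).r;
            lit += (lightUvz.z - bias > closest) ? 0.0 : 1.0;
        }
    }
    return lit / 9.0;   // averaging softens the shadow edges
}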

 

Because we use the shadow results both in the PBR shader and in the postprocessing stage, it is possible to reduce the specular shine on the armadillo when it is covered by shadow, and also to have shadows on areas like the floor that aren't rendered with the PBR technique:

And that concludes the shadow chapter as well! Let's finish by taking a glance at the final process.


Final Render

Now that we have covered each aspect of this homework individually, we can take a look at the final rendering pipeline as pseudocode:

 

main loop:
    if envmap is new:
        draw 6-sided environment cubemap from the equirectangular projection
        draw diffuse irradiance map from the env cubemap
        draw specular prefilter from the env cubemap for each mipmap level
    for each light:
        draw each object using a basic shader to capture a depth map from the light's pov
    draw each object using a basic shader to capture a depth map from the camera's pov
    draw a full-screen rectangle using the shadow shader, using the light & camera depth maps
    draw each object as usual from the camera's pov, disabling specular where there is shadow
    render to the screen using a postprocessing shader that does tonemapping

         

Last Thoughts

It's been an incredible journey! I want to thank all of you who made this journey possible, especially our teachers Ahmet Oğuz Akyüz and Kadir Cenk Alpay. You've made taking this course extremely enjoyable, to the point where it didn't feel like a course at all.

Happy holidays to you and everybody!

 

Take care,

İlker
