Adventures in Ray Tracing
Tuesday 19 July 2011
Concluding Remarks
The course is nearly finished and my ray tracer is nearly done. I did not get around to completing the spatial subdivision objective, so that means a maximum of 9/10 for the objectives. It's not a perfect program, and it definitely has some quirks, but I sure am proud of what I've accomplished. I've done my best to polish it up into a decent, user-friendly application. I hope my demo goes well!
Final Scene
My final scene consists of some marble pillars near some trees with mountains in the background. It is complete with a shining moon and a starry sky.
Here's the original image with no anti-aliasing:
And once more with anti-aliasing:
I ran it one more time after I implemented simple shadows (finally) but for some reason, I get funny lines on the right side of the screen. If it weren't for those, it would look much cooler.
There's also the slight anomaly of the moon casting a shadow on the sky. It's kind of silly, but I guess it gives the scene a sort of 'diorama' look?
I'm not sure how "interesting" it is, but I like it. It makes heavy use of texture mapping, cones and cylinders. Every object is texture mapped.
Multi-threading
This was probably the most frustrating objective to complete... and I'm not all that sure it's complete. The program now lets the user choose 1, 2, 4 or 8 threads to render the image. This speeds up the rendering time dramatically, but only if your computer has at least as many cores as the number of threads selected.
Below is a graph showing the time it takes to render a simple 256x256 image with 5x5 grid sampling:
This was tested on a 4-core machine, so there are no results for 8 threads. It's pretty obvious that more threads is awesome.
A fun byproduct of multi-threading is that you can see different sections of the image partly drawn when you cancel a render. Here's an example of an image partially drawn with 4 threads:
A not-so-fun byproduct of multi-threading is the unreliability. It's difficult to make a thread-safe program. My ray tracer has some undiscovered threading issues, so it will occasionally seg-fault. Ah well. I did my best.
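The basic idea is easy to sketch: split the image into bands of rows and give each thread its own band, so no two threads ever write the same pixel. Here's a rough Python version of that scheme (my ray tracer isn't written in Python, and trace_pixel below is just a stand-in for the real per-pixel work):

```python
import threading

WIDTH, HEIGHT = 256, 256

def trace_pixel(x, y):
    # Stand-in for the real ray tracing work done per pixel.
    return (x ^ y) & 0xFF

def render_rows(image, start_row, end_row):
    # Each worker fills only its own band of rows, so threads
    # never write to the same pixel.
    for y in range(start_row, end_row):
        for x in range(WIDTH):
            image[y][x] = trace_pixel(x, y)

def render(num_threads):
    image = [[0] * WIDTH for _ in range(HEIGHT)]
    band = HEIGHT // num_threads
    threads = []
    for i in range(num_threads):
        start = i * band
        # The last thread picks up any leftover rows.
        end = HEIGHT if i == num_threads - 1 else start + band
        t = threading.Thread(target=render_rows, args=(image, start, end))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return image
```

Since each band is independent, cancelling mid-render leaves each band partly filled, which is exactly the striped effect in the screenshot above.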
Fish Eye (Wide Angle) Lens
The "fish eye" or wide angle lens allows an image to show an increased field of view by projecting the image onto a sphere. This is best explained with a picture:
Here we see a sphere in a box. There are some other primitives, but they are far off to the side and we can only glimpse a corner of each.
When we use the fish eye lens, we get this:
Suddenly the shapes that were too far outside the field of view are now visible. Note that the sphere looks perfectly normal, although farther away, while the objects on the sides look curved around the edges of a sphere. I added a small fade to black around the edges for effect. The above image was rendered with a 40 degree field of view. The following one is the same image, only with an 80 degree field of view:
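The heart of the lens is how a pixel turns into a ray direction: the pixel's distance from the image centre becomes an angle off the viewing axis. Here's a rough Python sketch of that mapping (this assumes an equidistant-style projection; my exact mapping may differ slightly):

```python
import math

def fisheye_ray_direction(px, py, width, height, fov_degrees):
    # Map the pixel into [-1, 1] screen coordinates.
    sx = 2.0 * px / width - 1.0
    sy = 2.0 * py / height - 1.0
    r = math.hypot(sx, sy)
    if r > 1.0:
        return None  # outside the circular image: this is the black border
    # Equidistant projection: screen radius maps linearly to the
    # angle away from the view axis, up to half the field of view.
    theta = r * math.radians(fov_degrees) / 2.0
    phi = math.atan2(sy, sx)
    # Spherical to Cartesian, with z as the viewing direction.
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

With a bigger field of view, the same screen radius covers a wider cone of angles, which is why the 80 degree version looks more warped than the 40 degree one.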
Depth of Field
Depth of field is the added effect of focus. By focusing at a particular distance, objects nearer and farther away begin to blur. This is done simply with distributed ray tracing, adding a "disturbance" to each ray. Each ray's point of origin is shifted slightly based on the disturbance, which is a random offset. The offset increases as the distance between the destination of the ray and the point of focus increases. Since the destination is required to compute the disturbance, it must be calculated beforehand, so depth of field takes longer to render than an image with no focal point.
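Here's a rough sketch of the disturbance idea in Python. This is the standard thin-lens-style version of distributed depth of field, not my exact offset formula, and the aperture parameter is illustrative. The key property is the same: points at the focal distance stay fixed, so only objects away from it blur.

```python
import random

def jittered_ray(origin, direction, focal_distance, aperture):
    # The "destination" is the point the undisturbed ray reaches
    # at the focal distance; it stays fixed while the origin moves.
    focal_point = tuple(o + focal_distance * d
                        for o, d in zip(origin, direction))
    # Random disturbance of the origin, bounded by the aperture.
    jittered = tuple(o + random.uniform(-aperture, aperture)
                     for o in origin)
    # Re-aim from the disturbed origin toward the fixed focal point:
    # objects at the focal distance stay sharp, everything else blurs.
    new_dir = tuple(f - j for f, j in zip(focal_point, jittered))
    length = sum(c * c for c in new_dir) ** 0.5
    return jittered, tuple(c / length for c in new_dir)
```

Averaging several such rays per pixel produces the blur; more samples means smoother blur but longer render times.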
Here is a simple image with a bunch of spheres at various depths:
Now, here is the same image, only focusing on the nearest red ball:
Same image again focusing on the green ball in the middle:
Same image once more focusing on the farthest white ball:
Texture Mapping
Texture mapping was another objective that seemed complicated at first. The idea is to map a point on an object in 3-space to a 2-dimensional image and colour the point with the corresponding pixel in the image. I started with spheres, and again, it took a long time to actually get the math right.
After spheres, I did planes, which is the easiest primitive to map onto. Simply repeat the texture over and over again from a starting point out.
Other primitives were giving me trouble. Cones in particular were tricky. Then I discovered sphere-mapping. This is a method of texture mapping analogous to shrink wrapping. Simply project the texture onto a sphere, and then map the sphere onto any object using the radial distance from the center. The same method works for any bounded primitive (i.e. not planes).
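For spheres, the math boils down to converting the surface point into spherical coordinates and using the two angles as texture coordinates. Here's a rough Python sketch (the helper names are illustrative, not from my actual code):

```python
import math

def sphere_uv(point, center, radius):
    # Unit direction from the sphere's center to the surface point.
    dx = (point[0] - center[0]) / radius
    dy = (point[1] - center[1]) / radius
    dz = (point[2] - center[2]) / radius
    # u: longitude, the angle around the vertical axis.
    u = 0.5 + math.atan2(dz, dx) / (2.0 * math.pi)
    # v: latitude, the angle up from the equator (clamped for safety).
    v = 0.5 - math.asin(max(-1.0, min(1.0, dy))) / math.pi
    return u, v

def sample(texture, u, v):
    # texture is a 2-D grid of colours; clamp (u, v) into its bounds.
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]
```

Sphere-mapping an arbitrary bounded primitive then amounts to running the hit point through the same sphere_uv, treating the object's centre as the sphere's centre.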
Here is an example of texture mapping. A world map is texture-mapped on a cylinder and a box. The cone has the texture of a pine tree, and the sphere has a moon texture. The plane is textured with grass.
Cel-Shading
Cel-shading has been my favorite objective. It was the 3rd one I actually completed and I really like the way it looks. The method is fairly simple. Colours are flattened out into a set number of intensities based on the angle at which light hits an object's surface normal. This gives the impression of colouring objects in with a pencil crayon using flat shading (no gradients).
To get the black lines is a little trickier. A temporary image is first generated. This image colours each object with a flat shade of grey. Each object is given a different shade. Then, using the Sobel operator, a change in intensity is measured at each pixel. If the intensity of a colour changes, that means the edge of an object is at that pixel.
A giant grid of 1s and 0s is then produced. A 1 means there is no edge at the corresponding pixel and a 0 means there is an edge. When the actual image is rendered, each pixel colour is multiplied by its corresponding cell in the grid. So, if there is an edge at a particular pixel, it will be multiplied by 0, resulting in the colour black; otherwise the calculated colour is used.
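Both steps are short enough to sketch in Python. The level count and the zero gradient threshold here are illustrative, not the values I actually used:

```python
def quantize(intensity, levels=4):
    # Flatten a diffuse intensity in [0, 1] into a few discrete bands.
    band = min(int(intensity * levels), levels - 1)
    return band / (levels - 1)

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_mask(ids):
    # ids: the temporary image, one flat grey shade per object.
    # Returns the grid of 1s (no edge) and 0s (edge) by running the
    # Sobel operator and checking for any change in intensity.
    h, w = len(ids), len(ids[0])
    mask = [[1] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * ids[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * ids[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if gx * gx + gy * gy > 0:
                mask[y][x] = 0  # intensity changed: object edge here
    return mask
```

The final pass just multiplies each rendered pixel by its mask cell, which is what paints the black outlines.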
The following is a simple image rendered normally:
Here, we have the same image cel-shaded!