kRenderer

Introduction

kRenderer is a simple Java-based rendering program built from scratch by Kyle Anderson for CSCI 263 at George Washington University. The only Java graphics functionality kRenderer relies on is drawing a line to the screen and loading an image (for texture mapping). The program loads three-dimensional meshes in the .D format and transforms the points into world coordinates, view coordinates, and finally normalized perspective coordinates, based on parameters that can be set in the simple GUI. Finally, the image is drawn to the screen either by drawing a wireframe outline or by scan converting the mesh into fragments whose visibility is resolved with the z-buffer algorithm. More details of these processes follow.
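
The chain of transformations can be sketched roughly as follows. This is a minimal illustration assuming plain 4x4 matrix arrays and homogeneous points; the names are mine for illustration, not kRenderer's actual code.

    public final class TransformSketch {
        /** Multiply a 4x4 matrix by a homogeneous point (x, y, z, w). */
        static double[] mul(double[][] m, double[] p) {
            double[] r = new double[4];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    r[i] += m[i][j] * p[j];
            return r;
        }

        /** model -> world -> view -> perspective, then the perspective divide. */
        static double[] toNormalizedPerspective(double[] pModel,
                                                double[][] modeling,
                                                double[][] viewing,
                                                double[][] perspective) {
            double[] world = mul(modeling, pModel);
            double[] view  = mul(viewing, world);
            double[] clip  = mul(perspective, view);
            // Dividing by w yields normalized perspective coordinates.
            return new double[] { clip[0] / clip[3], clip[1] / clip[3],
                                  clip[2] / clip[3], 1.0 };
        }
    }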

Project 0 - Coordinate Transformation

First, the model is read into the program and the vertices are stored in model coordinates. Then the program applies the modeling matrix to move the mesh into world coordinates. At this point, the normal of each polygon is computed and compared with the viewing direction to perform back-face culling. An example of the transformed coordinates with back faces culled can be seen below.
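
The back-face test itself boils down to a cross product and a dot product. Here is a minimal sketch, assuming counterclockwise vertex winding and a viewing direction pointing from the camera into the scene; the names are illustrative, not kRenderer's actual code.

    public final class BackFaceSketch {
        /** Cross product of (b - a) and (c - a): the polygon normal, given CCW winding. */
        static double[] normal(double[] a, double[] b, double[] c) {
            double[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
            double[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
            return new double[] {
                u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]
            };
        }

        /** A face is culled when its normal points away from the camera. */
        static boolean isBackFace(double[] a, double[] b, double[] c, double[] viewDir) {
            double[] n = normal(a, b, c);
            double dot = n[0] * viewDir[0] + n[1] * viewDir[1] + n[2] * viewDir[2];
            return dot >= 0; // facing away from (or edge-on to) the viewer
        }
    }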

Project 1 - Scan Conversion

Next, each polygon is scan converted into image fragments. To accomplish this, edges are created from the polygon; each edge stores a starting y coordinate along with per-scanline slopes for the x and z coordinates, the texture coordinates, the color (in the case of Gouraud shading), and the normal (in the case of Phong shading). Then, the space between edges is filled in (and interpolated across) using the odd-even rule. These fragments are drawn to the screen buffer only if they are closer to the camera than the previous entry in the z-buffer at that pixel. An example of how the z-buffering algorithm can approximate mesh intersections can be seen in the image below.
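
The depth test at the heart of this step can be sketched as follows; the buffer layout and method names are assumptions for illustration, not kRenderer's actual types.

    public final class ZBufferSketch {
        final int width, height;
        final double[] zbuf;   // depth per pixel, initialized to "infinitely far"
        final int[] colorBuf;  // packed ARGB color per pixel

        ZBufferSketch(int width, int height) {
            this.width = width;
            this.height = height;
            this.zbuf = new double[width * height];
            this.colorBuf = new int[width * height];
            java.util.Arrays.fill(zbuf, Double.POSITIVE_INFINITY);
        }

        /** Write the fragment only if it is closer than what is already there. */
        void plot(int x, int y, double z, int argb) {
            int i = y * width + x;
            if (z < zbuf[i]) { // smaller z means closer to the camera here
                zbuf[i] = z;
                colorBuf[i] = argb;
            }
        }
    }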

Project 2 - Shading

kRenderer has three shading modes: Flat, Gouraud, and Phong. With flat shading, each polygon is filled with a single color, calculated from the base color of the object and the polygon normal using the Phong Illumination Model. In Gouraud shading, the color is calculated at each vertex using the vertex normal, then interpolated along each edge and across each scanline between the edges. Finally, in Phong shading, the surface normal is interpolated instead, and the color is calculated at each pixel. The images below show the same mesh, first with flat shading, then Gouraud, then Phong.
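
All three modes evaluate the same Phong Illumination Model, just at different frequencies (per polygon, per vertex, or per pixel). Here is a compact sketch of a single-light, single-channel version, assuming unit light intensities; the parameter names are illustrative.

    public final class PhongSketch {
        static double[] normalize(double[] v) {
            double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
            return new double[] { v[0] / len, v[1] / len, v[2] / len };
        }

        static double dot(double[] a, double[] b) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }

        /**
         * I = ka + kd*(n.l) + ks*(r.v)^shininess, clamped to [0, 1].
         * n is the surface normal, l points toward the light, v toward the viewer.
         */
        static double illuminate(double[] n, double[] l, double[] v,
                                 double ka, double kd, double ks, double shininess) {
            n = normalize(n); l = normalize(l); v = normalize(v);
            double nl = dot(n, l);
            double diffuse = Math.max(0, nl);
            // Reflect l about n: r = 2(n.l)n - l
            double[] r = { 2 * nl * n[0] - l[0], 2 * nl * n[1] - l[1], 2 * nl * n[2] - l[2] };
            double specular = Math.pow(Math.max(0, dot(r, v)), shininess);
            return Math.min(1.0, ka + kd * diffuse + ks * specular);
        }
    }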

Project 3 - Texture Mapping

Finally, textures can be applied to the meshes. To easily support a variety of image formats, I used the standard Java libraries to load the images. The textures need to be square, however. The texture is applied to the mesh by UV coordinates, which index into the texture map. The UV coordinates are generated simply by taking the fractional parts of a point's x and y coordinates, respectively (sketched below). This scheme provided fairly good (although sometimes unexpected) results. Below are three examples of texture mapped meshes.
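
A minimal sketch of this UV scheme follows. BufferedImage is the standard Java image class used for loading; the rest of the names are illustrative, not kRenderer's actual code.

    import java.awt.image.BufferedImage;

    public final class TextureSketch {
        /** u and v are just the fractional parts of x and y, so they land in [0, 1). */
        static double frac(double value) {
            return value - Math.floor(value);
        }

        /** Index a square texture with the generated UV coordinates. */
        static int sample(BufferedImage texture, double x, double y) {
            int size = texture.getWidth(); // square texture, so width == height
            int tx = (int) (frac(x) * (size - 1));
            int ty = (int) (frac(y) * (size - 1));
            return texture.getRGB(tx, ty); // packed ARGB at that texel
        }
    }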