February 2024

CS 184/284A: Computer Graphics and Imaging, Spring 2024

Homework 1: Rasterizer

Colin Steidtmann


I did something amazing: I programmed triangles on a screen! Before this class, I took computer graphics for granted. Pixels are little squares, so apart from perfectly vertical or horizontal lines, it's remarkable that we can put pixels together to form circles or the diagonal lines that make up triangles. In this assignment, I learned and implemented the algorithms behind displaying the smooth diagonal lines and images we often take for granted. Along the way, I learned why graphics in old-school video games look so pixelated, and struggled in my attempts to create SVG files by hand (see the end).

Vocab to be aware of 

Why Use Triangles? We only need 3 points to define one, all three points of a triangle always lie on the same plane, modern GPUs are specifically designed to process triangles efficiently, and any 3D shape, no matter how intricate, can be approximated by combining numerous triangles.

Rasterization: The process of finding all the pixels in an image that are occupied by a geometric primitive

Aliasing: Visual artifacts like jaggies, the wagon-wheel illusion (false motion), and moiré patterns, all caused by undersampling. They appear when a fast-changing (high-frequency) signal is sampled too slowly. Sampling a high-frequency function too slowly can make it indistinguishable from a low-frequency function, which we perceive as these poor visual effects.

Raster Display: Raster displays show images as rectangular arrays of pixels.

Sampling: Storing values of a function so that you can reconstruct the points in-between when necessary. Video=sampling time. Photograph=sampling physical world light. Rasterization=sampling 2D positions. 

Supersampling: Create images at a much higher resolution and then downsample, averaging each NxN block of samples into a single pixel value. An anti-aliasing method.

Downsample: reduce pixel count by averaging values

Subsample: reduce pixel count by skipping pixels

Level Sampling: choosing which level of detail (mipmap level) of a texture to sample from, based on how much detail a pixel actually needs.

Pixel Sampling: the process of determining the color of a pixel on a texture map. Nearest neighbor sampling selects the color of the nearest texel, while bilinear sampling interpolates between the colors of the four nearest texels for smoother results.

Interpolation: Estimating the value of a function or data point between known values.

Homogeneous Coordinates: Point = (x,y,1). Vector = (x,y,0). Operations are valid if the resulting w coordinate (3rd coordinate) is 1 or 0. E.g. Point − Point is okay (it yields a vector, w = 0), but Point + Point is not (w would be 2).

Barycentric Coordinates: A coordinate system defined relative to a triangle's vertices that makes interpolation across the triangle straightforward.

Bilinear Interpolation: A technique used to estimate the value of a point within a two-dimensional grid by considering the values of its four nearest neighbors. It's essentially a fancy way of averaging those neighboring values

MipMaps: pre-filtered, downsized versions of the original texture arranged in a kind of pyramid: Level 0: Full resolution texture (original detail), Level 1: Half the resolution (less detail), Level 2: Quarter the resolution (even less detail), ... and so on. 

Trilinear Filtering: Blending adjacent MipMap levels (D and D+1) to reduce aliasing

Texture coordinates: (u,v) coordinates of the texture that get mapped to the image and 3D mesh

Section I: Rasterization

Part 1: Rasterizing single-color triangles

To rasterize triangles (render triangles on the screen), I created a function that took in the 3 (x,y) coordinates of my triangle and iterated over its bounding box, testing whether each pixel is inside the triangle or not. This is a type of sampling. If a pixel was in the triangle, I colored it in.

I tested if a point was inside a triangle by calculating, for each edge, the dot product of that edge's normal vector with the vector from one of the edge's vertices to the point. If all three dot products were positive, the point was inside the triangle, and I colored it in. Each positive dot product indicated a "push" from that triangle wall, confirming the pixel lies on the inner side of all three edges.

Later, I had to update my code to also color in pixels when all three dot products were negative. This was necessary to make my implementation robust to either ordering of the triangle's vertices. When the winding order is reversed, every edge normal flips direction, so the normals can be interpreted as "pulling" the point inward rather than pushing; a point with all negative dot products therefore still lies inside the triangle, just one wound in the opposite direction.
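The inside test described above can be sketched as follows. This is a hypothetical, simplified helper in Python rather than the actual assignment code; it computes the three edge-normal dot products and accepts both winding orders.

```python
# Point-in-triangle test via edge normals (hypothetical sketch).
def inside_triangle(px, py, x0, y0, x1, y1, x2, y2):
    def edge(ax, ay, bx, by):
        # Normal of edge (a -> b) is (-(by - ay), bx - ax);
        # dot it with the vector from a to the sample point.
        return -(by - ay) * (px - ax) + (bx - ax) * (py - ay)

    d0 = edge(x0, y0, x1, y1)
    d1 = edge(x1, y1, x2, y2)
    d2 = edge(x2, y2, x0, y0)

    # Accept both windings: all non-negative or all non-positive.
    return (d0 >= 0 and d1 >= 0 and d2 >= 0) or \
           (d0 <= 0 and d1 <= 0 and d2 <= 0)
```

Using `>=`/`<=` rather than strict inequalities also colors points lying exactly on an edge, so adjacent triangles don't leave gaps between them.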

This simple method of rasterizing triangles can cause visual artifacts like jagged edges and disconnected triangle pieces. This is a form of aliasing: the sampling frequency (one sample per pixel) is too low for the fast-changing signal (triangle edges). This mismatch between the continuous geometry and the discrete pixel grid produces the visible artifacts.

Part 2: Antialiasing triangles

To implement antialiasing for my triangle rendering, I introduced supersampling by increasing the sampling rate to capture high-frequency details. This involved adding a sample_rate variable and sampling sample_rate times within each pixel. The individual colors were then averaged to determine the final pixel shade. This approach replaces my previous method of simply checking if the pixel's center was inside the triangle; now, the shading reflects the percentage of samples found within the triangle's area.

sample_rate = 1

sample_rate = 4

sample_rate = 16

As you can see, supersampling produces much smoother transitions along the triangle boundary due to the increased sampling rate within each pixel. This is achieved by evaluating sample_rate points and averaging their results. However, this improvement comes at a cost. I had to increase my buffer size that stores sample points by a factor of sample_rate and increase my runtime by the same factor.
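The per-pixel averaging described above can be sketched like this. It is a simplified, hypothetical version for a single pixel: `covered` stands in for the inside-triangle test, and `sample_rate` is assumed to be a perfect square, as in the images above.

```python
# Supersampling one pixel (hypothetical sketch).
def supersample_pixel(x, y, covered, sample_rate):
    """Average `sample_rate` sub-samples inside pixel (x, y).

    `covered(sx, sy)` returns 1.0 if the sample point is inside
    the primitive, else 0.0.
    """
    n = int(sample_rate ** 0.5)         # samples per axis
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Sample at the center of each sub-pixel cell.
            sx = x + (i + 0.5) / n
            sy = y + (j + 0.5) / n
            total += covered(sx, sy)
    return total / sample_rate          # fraction of samples covered
```

A pixel straddling a triangle edge then gets a shade proportional to how much of it the triangle covers, which is exactly what smooths the boundary.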

Part 3: Transforms

The following robot images were created from SVG files, which used the translation, rotation, and scaling functions I implemented.

Pose 1

First, I demonstrated a simple color change.

Pose 2

Next, I made the robot rainbow-colored and oriented the legs to point inwards.

My code implements simple transformations by multiplying the image points (as vectors) by a transformation matrix. In computer graphics, we add a "homogeneous coordinate" to the image point vector. This extra dimension allows us to represent translations using simple matrix multiplication alongside rotations and scaling. Any transformation can be decomposed into one or more rotation, scaling, and translation matrix multiplications. To rotate something around a point c other than the origin, we first translate c to the origin, rotate around the origin, then translate back. The three matrices we multiply together, applied right to left, are: Translate(c) dot Rotate(theta) dot Translate(-c).
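One common formulation of rotation about a center c with homogeneous 3x3 matrices can be sketched as follows. These are hypothetical helpers, not the assignment's own matrix code.

```python
import math

def translate(tx, ty):
    # Translation is only expressible as a matrix thanks to w = 1.
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    # Points carry homogeneous coordinate w = 1.
    p = [x, y, 1]
    return tuple(sum(m[i][k] * p[k] for k in range(3)) for i in range(2))

def rotate_about(theta, cx, cy):
    # Move c to the origin, rotate, move back (applied right to left).
    return matmul(translate(cx, cy),
                  matmul(rotate(theta), translate(-cx, -cy)))
```

For example, rotating the point (2, 1) by 90 degrees about (1, 1) lands on (1, 2), since the point first shifts to (1, 0), rotates to (0, 1), then shifts back.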

Extra Credit: All the images in this report came from a GUI that displayed what I had rasterized. To earn some bonus points, I added the ability to rotate the viewport clockwise and counterclockwise using the keys N and M, respectively. This involved finding the code that dealt with keyboard events, adding my extra conditional keys (N & M), and calling a function I wrote to rotate the viewport. Inside my rotation function, I first got the device coordinates by converting from the SVG coordinates. I then translated the viewport center to the origin, rotated it by the degree amount passed in as an argument using a matrix multiplied with the device coordinates, and finally translated the viewport center back to the device center. The results above show screenshots taken every PI/8 rotation, first rotated counterclockwise and then clockwise. Finally, I uploaded all the images to a site that could create a GIF from them.

Section II: Sampling

Part 4: Barycentric coordinates

Barycentric coordinates represent the location of a point within a geometric shape, such as a triangle. Instead of using standard x and y coordinates, you express the point's position relative to the shape's vertices (corners). To calculate barycentric coordinates, you label the triangle's corners A, B, and C, and compute three corresponding weights - alpha, beta, and gamma - that indicate how much each corner contributes to the point's position (alpha + beta + gamma = 1).

Barycentric coordinates are widely used in computer graphics for shading, texture mapping, animation, Bézier surfaces, and more. In the triangle example, we assign a different color to each vertex and then use the alpha, beta, and gamma weights mentioned earlier to interpolate colors in the middle. A circle built from many triangles can produce even better color gradients, since the increased number of vertices available for interpolation yields smoother transitions.
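The weight computation and color interpolation described above can be sketched as follows (a hypothetical helper using the signed-area formulation, not the assignment's code).

```python
# Barycentric weights of point (px, py) w.r.t. triangle A, B, C.
def barycentric(px, py, ax, ay, bx, by, cx, cy):
    # Each weight is the signed area of the sub-triangle opposite
    # its vertex, divided by the area of the whole triangle.
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    alpha = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    beta = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    gamma = 1.0 - alpha - beta
    return alpha, beta, gamma

# Blend per-vertex colors with those weights.
def interpolate_color(weights, color_a, color_b, color_c):
    a, b, g = weights
    return tuple(a * ca + b * cb + g * cc
                 for ca, cb, cc in zip(color_a, color_b, color_c))
```

At the centroid all three weights are 1/3, so a triangle with red, green, and blue vertices shades its center an even gray mix of the three.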

Part 5: "Pixel sampling" for texture mapping

My next challenge was to incorporate textures into the triangles using a texture image where each pixel is a "texel." The triangle's corners are assigned (u,v) coordinates corresponding to the texture region they should map to. Using the alpha, beta, and gamma values, (x,y) coordinates within the triangle are mapped to (u,v) coordinates in the texture; determining a color from the texels around that (u,v) location is known as pixel sampling. Two pixel sampling methods are commonly used:

Nearest Neighbor (1 Sample Rate)

Bilinear (1 Sample Rate)

The first two images demonstrate when bilinear sampling outperforms nearest neighbor. At a sample rate of 1, which corresponds to the original texture resolution, bilinear sampling's gentle blending eliminates pixelation and aliasing, resulting in smoother transitions compared to the harsher edges of nearest neighbor.

Nearest Neighbor (16 Sample Rate)

Bilinear (16 Sample Rate)

However, these next two images show nearest neighbor outperforming bilinear at a sample rate of 16. At this high sample rate, nearest neighbor can sample 16 times per texel, creating perfectly colored pixels. In contrast, bilinear sampling selects 4 surrounding texels, introducing shimmering and aliasing. Additionally, I observed increased heat generation from my computer due to the high sample rate and computationally intensive bilinear sampling.
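The two pixel-sampling methods compared above can be sketched like this, over a texture stored as a 2D grid of texel values (hypothetical and grayscale for brevity; (u, v) here is already in texel space).

```python
# Nearest-neighbor: snap (u, v) to the closest texel.
def sample_nearest(tex, u, v):
    x = min(round(u), len(tex[0]) - 1)
    y = min(round(v), len(tex) - 1)
    return tex[y][x]

# Bilinear: blend the four texels surrounding (u, v),
# weighted by how close (u, v) is to each.
def sample_bilinear(tex, u, v):
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(tex[0]) - 1)
    y1 = min(y0 + 1, len(tex) - 1)
    s, t = u - x0, v - y0
    top = (1 - s) * tex[y0][x0] + s * tex[y0][x1]
    bottom = (1 - s) * tex[y1][x0] + s * tex[y1][x1]
    return (1 - t) * top + t * bottom
```

Nearest-neighbor returns an exact texel value (hard edges), while bilinear returns a weighted average (smooth but slightly blurred), which matches the trade-off seen in the images.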

Part 6: "Level sampling" with mipmaps for texture mapping

To enhance texture mapping and reduce aliasing, another method involves using mipmaps, which are multiple scaled-down, lower-resolution versions of the original texture. By calculating the rate of change of texture coordinates across a surface, we can determine the level of detail needed and select the appropriate mipmap. This process, known as level sampling, filters out high-frequency signals because each mipmap level is produced by downsampling the level above it, averaging the color information of neighboring texels. This reduces aliasing by smoothing transitions between pixels, resulting in a more natural appearance, especially at glancing angles or when scaling up or down. Mipmaps are also stored efficiently, since each additional level consumes a quarter of the memory of the one before. With level sampling, we choose between the original texture image (level 0), the nearest level (determined by the rate of change), or a bilinear combination of the two nearest levels, which is usually the best choice when the desired level sits between two levels.
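The level-selection logic described above can be sketched as follows. This is a hypothetical simplification: the mipmap level comes from how fast (u, v) changes per screen pixel, and trilinear filtering blends the two adjacent levels D and D+1.

```python
import math

# Pick a (continuous) mipmap level from the screen-space derivatives
# of the texture coordinates, measured in texels per pixel.
def mipmap_level(dudx, dvdx, dudy, dvdy, max_level):
    # Largest rate of change across one screen pixel.
    footprint = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    level = math.log2(max(footprint, 1e-9))
    return min(max(level, 0.0), max_level)   # clamp to available levels

# Trilinear filtering: blend levels D and D+1 by the fractional part.
def trilinear(sample_at_level, level):
    d = int(level)
    frac = level - d
    return (1 - frac) * sample_at_level(d) + frac * sample_at_level(d + 1)
```

Intuitively, if one screen pixel covers a 4x4 block of texels, the footprint is 4 and log2 gives level 2, whose texels are each pre-averaged from 4x4 blocks of the original.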

Now we can compare a new set of sampling methods:

Ultimate Comparison

First we'll see that on a zoomed-out image with lots of detail and using the nearest texel, interpolation across different Mipmap levels doesn't provide significant benefits.

Level 0, Nearest Neighbor, 1 Sample Rate

Nearest Level, Nearest Neighbor, 1 Sample Rate

Bilinear Level, Nearest Neighbor, 1 Sample Rate

Next, instead of using the nearest texel on a zoomed-out image, using the nearest 4 texels helps tremendously, especially when using the appropriate Mipmap level.

Level 0, Bilinear Neighbor, 1 Sample Rate

Nearest Level, Bilinear Neighbor, 1 Sample Rate

Bilinear Level, Bilinear Neighbor, 1 Sample Rate

While the above images show that choosing the appropriate Mipmap level helps greatly when sampling only once per pixel, the results are closer when sampling 16 times per pixel, as seen below.

Level 0, Nearest Neighbor, 16 Sample Rate

Nearest Level, Nearest Neighbor, 16 Sample Rate

Bilinear Level, Nearest Neighbor, 16 Sample Rate

In my opinion, the results above hit the sweet spot for displaying this image. However, for fun, we can look at bilinear sampling, which averages the nearest 4 texels. It seems to over-blur when sampling 16 times per pixel, as shown below:

Level 0, Bilinear Neighbor, 16 Sample Rate

Nearest Level, Bilinear Neighbor, 16 Sample Rate

Bilinear Level, Bilinear Neighbor, 16 Sample Rate

It's similar to selecting the best smartphone photo after a certain level of antialiasing. As you invest more effort into methods like supersampling, bilinear pixel sampling, or texture image level sampling, there are diminishing returns. Eventually, the antialiasing efforts can actually degrade the image quality rather than improve it.

Section III: Art Competition

Part 7: Draw something interesting!

As a bonus, this assignment gave students the opportunity to compete in an art competition. Since not many uploads were shared in our class discussion channel on the night the assignment was due, I decided to give it a shot. My first idea was to create the text "CS 184 Rasterizer" and make "Rasterizer" super jagged. Unfortunately, our rasterizer program is very simple and cannot work with text. It can only work with SVG Points, Lines, Polylines, Rects, Polygons, and Group classes. Given these constraints, I went for something simpler, something I would draw as a kid: two mountains with a sunset in the middle. I spent nearly an hour trying to get my sun to appear until I realized that SVG's "Circle" didn't work in our rasterizer environment. So I had to get creative and build a sun out of a square and four triangles. It took a while to position everything correctly, since every shape has to be placed by specifying its coordinates in the viewport, but eventually I got things working. I also tried to add a lightning bolt coming out of the sky, after seeing something that looked like one while browsing the MDN docs for Polylines, but it didn't come out too well. To make my art submission stand out from the competition, I combined it with my extra credit feature from Part 3, made it rotate, and turned the frames into a GIF. Below are the results; enjoy. 🙂

Rotating Custom Art

Two mountains, blue sky background, brown ground, sun in the middle, lightning bolt in the sky

Still Custom Art

Two mountains, blue sky background, brown ground, sun in the middle, lightning bolt in the sky