Making a 3D Game Engine

RENDERING: OPENGL | LANGUAGE: C++ | TEAM SIZE: SOLO

JUNE 2022 – ONGOING

What you see above is a clip of gameplay from the engine I made. It took two years of solo work to build it to this point.

Ever since I began studying computer science, I have wanted to make a 3D game engine, because I really enjoy working with the 3D math concepts involved. Whenever I played a 3D video game, I wondered how it knew when two objects were colliding, or how it rendered the objects on screen through a virtual camera.

I made this game engine so I could have the satisfaction of learning the answers to these questions.

The Collision System

Above: my collision system in action

“Collision detection” is a deceptively simple-sounding concept, because the name suggests a yes/no question (“Are these objects intersecting?”). The answer to that question, however, is only part of what you need when resolving a collision.

To really make it work, I also needed to answer questions like “by exactly how much is this object penetrating the other?” and “at what point do they intersect?”. To make objects react as convincingly as they do in the shot above, I needed a solution that could work on any axis.
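
In simplified form (using GLM vector types purely for illustration; these names are not the engine’s exact API), the information a resolver needs per contact, and the simplest way to use it, look something like this:

    #include <glm/glm.hpp>

    // What a "yes, they collide" answer has to come with:
    struct Contact {
        glm::vec3 normal; // direction of minimum separation
        float     depth;  // how far the shapes overlap along that normal
        glm::vec3 point;  // where they touch (needed for convincing reactions)
    };

    // Simplest possible resolution: push the two bodies apart along the
    // contact normal, splitting the correction between them. Because the
    // normal can point in any direction, this works on any axis.
    void resolvePenetration(glm::vec3& posA, glm::vec3& posB, const Contact& c) {
        glm::vec3 correction = c.normal * (c.depth * 0.5f);
        posA += correction;
        posB -= correction;
    }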

That’s why I researched how mathematical concepts such as the Minkowski sum could be used to detect and resolve collisions between convex three-dimensional polyhedra of any kind. The Minkowski sum is essentially a way of combining two convex shapes in an imaginary space.
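
Concretely, the sum collects every pairwise sum of the two shapes’ points, and collision detection actually uses its mirror image, the Minkowski difference:

    A \oplus B = \{\, a + b : a \in A,\ b \in B \,\}
    A \ominus B = \{\, a - b : a \in A,\ b \in B \,\}

The payoff is a clean test: A and B intersect exactly when the origin lies inside their Minkowski difference, since any shared point a = b contributes a - b = 0.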

I learnt how the Gilbert-Johnson-Keerthi (GJK) algorithm uses this idea to detect collisions, and how the Expanding Polytope Algorithm (EPA) finds the exact penetration vector needed to resolve them. Then I wrote code to implement both.
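
The trick that makes GJK practical is that you never build the Minkowski difference explicitly; you only ever query its farthest point in a given direction, which comes straight from the two shapes’ own support points. In simplified form (GLM vector types used for illustration):

    #include <glm/glm.hpp>
    #include <vector>

    // Farthest vertex of a convex hull in direction d (assumes a
    // non-empty vertex list).
    glm::vec3 support(const std::vector<glm::vec3>& verts, const glm::vec3& d) {
        glm::vec3 best = verts[0];
        float bestDot = glm::dot(best, d);
        for (const glm::vec3& v : verts) {
            float t = glm::dot(v, d);
            if (t > bestDot) { bestDot = t; best = v; }
        }
        return best;
    }

    // Support point of the Minkowski difference in direction d: the
    // farthest point of A along d minus the farthest point of B along -d.
    // GJK builds its simplexes from points generated this way, and EPA
    // expands a polytope of them to find the penetration vector.
    glm::vec3 supportMinkowskiDiff(const std::vector<glm::vec3>& a,
                                   const std::vector<glm::vec3>& b,
                                   const glm::vec3& d) {
        return support(a, d) - support(b, -d);
    }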

Building and debugging my implementations of these algorithms has been a very involved journey over the last few years, and I am pleased to say they now work very consistently.

The Graphics System

On the graphics side, I built the rendering pipeline using OpenGL, and I began using shaders for more than just debug-drawing the hitboxes.

For example, I used the vertex shader to create responsive blades of “grass” that the player could interact with. I learnt this technique during my internship at Blizzard; it was fun to put it into practice even after the internship ended.
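
A minimal sketch of the general idea in GLSL (embedded here as a C++ string; the uniform names and the falloff are illustrative choices, not the actual shader): push each vertex away from the player, scaled by how far up the blade it sits, so the roots stay planted while the tips bend.

    const char* kGrassVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    layout(location = 1) in float aBladeHeight; // 0 at the root, 1 at the tip

    uniform mat4 uViewProj;
    uniform vec3 uPlayerPos;
    uniform float uPushRadius; // how close the player must be to bend a blade

    void main() {
        // Horizontal offset from the player to this vertex.
        vec2 away = aPos.xz - uPlayerPos.xz;
        float dist = length(away);
        // Bend falls off with distance and grows toward the tip.
        float bend = max(0.0, 1.0 - dist / uPushRadius) * aBladeHeight;
        vec2 dir = dist > 0.001 ? away / dist : vec2(0.0);
        gl_Position = uViewProj * vec4(aPos + vec3(dir.x, 0.0, dir.y) * bend, 1.0);
    }
    )";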

That was one of several ways I tried to bring the environments to life. I also implemented time-based color cycling on some of the textures to make them more vibrant.
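
In its simplest form, that kind of cycling is just the sampled texture color modulated by phase-offset sine waves driven by a time uniform (again sketched in GLSL with illustrative names):

    const char* kColorCycleFragmentShader = R"(
    #version 330 core
    in vec2 vUV;
    out vec4 fragColor;

    uniform sampler2D uTexture;
    uniform float uTime; // seconds since startup

    void main() {
        vec4 base = texture(uTexture, vUV);
        // Each channel pulses on a sine wave offset by 120 degrees, so
        // the tint drifts around the color wheel over time.
        vec3 cycle = 0.5 + 0.5 * vec3(sin(uTime),
                                      sin(uTime + 2.094),
                                      sin(uTime + 4.189));
        fragColor = vec4(base.rgb * cycle, base.a);
    }
    )";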

Another interesting problem was keeping the player visible to the camera even when objects obstruct the line of sight. To solve this, I used the fragment shader to detect when part of a polygon was between the player and the camera, and made that region less opaque.
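
In simplified form, that test looks something like this (the player’s on-screen position and depth are assumed to be computed on the CPU each frame and passed in; the names are illustrative):

    const char* kOccluderFragmentShader = R"(
    #version 330 core
    in vec2 vUV;
    out vec4 fragColor;

    uniform sampler2D uTexture;
    uniform vec3 uPlayerScreen; // xy = player position in pixels, z = player depth
    uniform float uFadeRadius;  // size of the see-through window, in pixels

    void main() {
        vec4 base = texture(uTexture, vUV);
        float alpha = base.a;
        // Fade only fragments that are closer to the camera than the
        // player and near the player's position on screen.
        if (gl_FragCoord.z < uPlayerScreen.z) {
            float d = distance(gl_FragCoord.xy, uPlayerScreen.xy);
            alpha *= clamp(d / uFadeRadius, 0.2, 1.0); // less opaque, not invisible
        }
        fragColor = vec4(base.rgb, alpha);
    }
    )";

For the fade to actually show through, the occluder’s draw call needs alpha blending enabled.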

Camera control itself was another technical rendering challenge I addressed. It was tricky because the engine demo I was making allowed the player to move around on a planet, which called for unconventional camera behavior.

After trying several solutions, I settled on making the planet the camera’s focus and rotating the camera based on the player’s angle from the center. This is one example of the kind of problem solving I needed to do to support the demo’s unique gameplay, which involves, after all, ice skating and sword fighting on frozen planets of various shapes.
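
In GLM-style C++ (simplified, with illustrative names rather than the engine’s actual API), the idea looks roughly like this:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // The camera sits on the line from the planet's center out through
    // the player, so it orbits as the player moves around the surface.
    glm::mat4 planetCameraView(const glm::vec3& planetCenter,
                               const glm::vec3& playerPos,
                               float cameraDistance,
                               glm::vec3& cameraUp) { // persisted across frames
        // Radial direction from the center out through the player.
        glm::vec3 radial = glm::normalize(playerPos - planetCenter);
        glm::vec3 eye = planetCenter + radial * cameraDistance;

        // The view direction runs along -radial, so "up" must stay
        // perpendicular to it. Re-projecting last frame's up keeps the
        // horizon stable; a fixed world up would flip as the player
        // rounds the planet.
        cameraUp = glm::normalize(cameraUp - radial * glm::dot(cameraUp, radial));

        return glm::lookAt(eye, planetCenter, cameraUp);
    }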

It thrills me to see how far this passion project has come and to think of how much further it can go.