Playing video games is not only fun but also an engaging activity that can lead to catharsis. There is, however, another dimension to this pastime which many players enjoy even more: building the game itself. This is why many games ship with level editors. For me, video games are not only a way of experiencing a given world, but also an opportunity to modify and enhance that world as a form of self-expression.
I find game programming intellectually satisfying. It essentially amounts to building an interactive world, giving that world structure by enforcing the rules set by the programmer, and then watching the world evolve under those conditions. It is exhilarating to watch your own creation unfold before your eyes while experiencing yourself as an integral part of it.
Almost all games are made using game engines. Unreal Engine and Unity are prime examples of commercial game engines available in the market. For a serious game developer, writing your own game engine is a good exercise: it not only exposes you to current state-of-the-art game development methodologies but also gives you complete control over the entire construction process. With this in mind, I have embarked upon the journey of writing a game engine myself. I call it Karma and it can be found here.
A game engine has several sub-systems, including a rendering engine. As the name suggests, its job is to draw (render) the graphics corresponding to the visible game objects (player characters, weapons, mountains, and so on) on the screen. The engine has to draw the entire scene quickly enough (typically within 16.6 milliseconds per frame, since 1000 ms / 60 ≈ 16.6 ms) that 60 frames can be drawn each second and the illusion of continuous motion of game objects is generated in real time. In this blog post we will focus on rendering graphics.
Real-time computer graphics are rendered using triangles. They are the building blocks of CG just like strings are the building blocks of matter in string theory (or, for the faint-hearted, cells are the building blocks of life in biology). Triangles are optimal candidates for real-time rendering because
- a triangle is the simplest polygon
- a triangle is always planar
- a triangle remains a triangle under most transformations (including affine and perspective transformations)
- virtually all commercial graphics-acceleration hardware is designed around triangle rasterisation
A simple demonstration of triangles generating graphics can be found here and here. Rendering a triangle efficiently is therefore a crucial first step in real-time computer graphics. I will demonstrate here how Karma does it.
First, grab the code from my GitHub repository. GitHub is a service which provides software hosting (for free!) and version control using Git, so one can track the software through its various stages of development back in time. Make sure you have Git installed, then open cmd (on Windows) or a shell (on Linux) and type
git clone --recurse-submodules https://github.com/ravimohan1991/KarmaEngine.git
cd KarmaEngine
git reset --recurse-submodules --hard 9e3046842db787eb847e636baf5058f5a6068fe9
The above commands download the Karma engine and reset the local repository to the state, back in time, when the first triangle was rendered by Karma. Now, depending on your platform, proceed as follows.
Windows
Make sure you have Visual Studio 2017 (or higher) installed. Go to the KarmaEngine directory in the explorer and run the file GenerateProjects.bat. This will create the KarmaEngine.sln file, which can then be opened in Visual Studio. Karma can now be built from Visual Studio!
Linux
Type the following in the Linux shell
vendor/bin/Premake/Linux/premake5 gmake
make
This will build the Karma engine in the “build” directory, from where you can run it. You will get something similar to what is shown in the first image of this blog post.
Now that you have the code, let us first understand the theory behind rendering triangles; later we will see how it is implemented in C++. In order to render a triangle, we need information about its vertices. This information (the vertex attributes) can include the position of each vertex, the unit normal/tangent at each vertex, diffuse/specular colours, and texture coordinates for each vertex.
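To make the list concrete, here is one hypothetical way such a vertex could be laid out in C++ (an illustrative sketch only; Karma's first triangle, as we will see shortly, carries positions alone):

// Hypothetical vertex layout carrying the attributes listed above
struct Vertex
{
    float position[3];  // x, y, z in model space
    float normal[3];    // unit normal at the vertex
    float color[4];     // diffuse RGBA colour
    float texCoord[2];  // u, v texture coordinates
};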
For simplicity, let us consider rendering a cube (using triangles, as shown in the figure)
and information consisting of only position data. We then need a data structure to store the triangular tessellation in an efficient way. We use two buffers
- vertex buffer: stores each vertex to be rendered, exactly once
- index buffer: stores triples of indices into the vertex buffer, one triple per triangle
This is done to avoid duplicating the data and to save the GPU bandwidth required to process the associated transformations (model space to view space or clip space); a sketch of the two buffers for the cube is shown below. Once this is done, shaders are deployed to compute the attributes, like colour, per pixel (by interpolation or using texture maps).
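Here is a minimal sketch of what the two buffers for the cube could look like (illustrative values, not taken from the Karma source). Each corner position is stored once and referenced by index, so 8 vertices serve 36 index entries, and each position is transformed once instead of up to six times:

// Vertex buffer: the 8 corner positions of a cube, stored once each
float cubeVertices[8 * 3] = {
    -1.0f, -1.0f, -1.0f,  // 0
     1.0f, -1.0f, -1.0f,  // 1
     1.0f,  1.0f, -1.0f,  // 2
    -1.0f,  1.0f, -1.0f,  // 3
    -1.0f, -1.0f,  1.0f,  // 4
     1.0f, -1.0f,  1.0f,  // 5
     1.0f,  1.0f,  1.0f,  // 6
    -1.0f,  1.0f,  1.0f   // 7
};

// Index buffer: 12 triangles (2 per face), each a triple of indices into
// cubeVertices, wound counter-clockwise when viewed from outside the cube
unsigned int cubeIndices[12 * 3] = {
    0, 3, 2,  2, 1, 0,  // back   (z = -1)
    4, 5, 6,  6, 7, 4,  // front  (z = +1)
    0, 4, 7,  7, 3, 0,  // left   (x = -1)
    1, 2, 6,  6, 5, 1,  // right  (x = +1)
    0, 1, 5,  5, 4, 0,  // bottom (y = -1)
    3, 7, 6,  6, 2, 3   // top    (y = +1)
};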
To render graphics we will use OpenGL, which is a cross-platform graphics rendering API. If you are running Windows or Linux on modern hardware, chances are that your system already has OpenGL support. If we look into the code here, we find
glGenVertexArrays(1, &m_VertexArray);
glBindVertexArray(m_VertexArray);
This generates a vertex array object (the basic container for vertex attribute state, required by the core OpenGL profile), stores its id in the m_VertexArray variable, and binds it. Next we have
glGenBuffers(1, &m_VertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_VertexBuffer);

// Clip space coordinates
float vertices[3 * 3] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};

// Upload to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
Here we are generating the vertex buffer and binding it to the m_VertexBuffer id. GL_ARRAY_BUFFER is the target, which tells OpenGL that we intend to use the buffer for vertex attributes. We then define the array of vertices, specifying the coordinates of the triangle's vertices in clip space (which spans -1 to 1 along each axis). Finally we upload the data to the GPU. GL_STATIC_DRAW is a hint that the data will be set once and drawn many times; we are not streaming dynamic data. It is static!
Next we have
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
which tells OpenGL about the layout of the data we have specified so that the shader can use it. glVertexAttribPointer says that at index 0 there are 3 floats (GL_FLOAT) which are not normalised (GL_FALSE); the stride is the number of bytes between consecutive vertices in the vertex array (3 * sizeof(float)); and the last argument is the byte offset of the attribute within a vertex, which we set to nullptr (i.e. zero) since position is the only attribute.
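To see how the stride and offset arguments earn their keep, here is a hypothetical interleaved layout with a second attribute (a per-vertex colour). This is not part of Karma's triangle code, which stores positions only:

// Hypothetical interleaved layout: position (3 floats) + colour (3 floats)
float vertices[3 * 6] = {
    // x,     y,     z,    r,    g,    b
    -0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f, 0.0f, 1.0f, 0.0f,
     0.0f,  0.5f, 0.0f, 0.0f, 0.0f, 1.0f
};

// Attribute 0 (position): 3 floats, starting at byte offset 0
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), nullptr);

// Attribute 1 (colour): 3 floats, starting 3 floats into each vertex
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                      reinterpret_cast<const void*>(3 * sizeof(float)));

The stride grows to 6 floats because a whole vertex is now wider, and the colour attribute's offset skips past the position at the start of each vertex.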
Finally we do the same thing with the index buffer: we generate the buffer and bind it to m_IndexBuffer, then define the sequence of indices to be rendered (in counter-clockwise order) and upload the data to the GPU. Since we have not supplied a shader of our own yet, the graphics driver falls back to a default shader which colours all the pixels within the triangle white. After this initialisation, this piece of code draws the triangle on the screen. The end result is shown in the first figure of this blog post.
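Put together, the index-buffer setup and the per-frame draw call could look like the following sketch (the variable names mirror the vertex-buffer code above, but the exact Karma source may differ):

// Generate the index buffer and bind it as the element array
glGenBuffers(1, &m_IndexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_IndexBuffer);

// One triangle: indices into the vertex buffer, counter-clockwise
unsigned int indices[3] = { 0, 1, 2 };
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Each frame: bind the vertex array and draw the indexed triangle
glBindVertexArray(m_VertexArray);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, nullptr);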
So this is it! Once a triangle is rendered, we are a step closer to understanding how real-time graphics are rendered in games. From here we can start building up and rendering more complex surfaces, leading to the beautiful scenes we desire to produce.