In this third assignment of the course, we ask you to set up the basic building blocks of OpenGL, which will allow you to afterwards implement the visualization part of the final project. We will also ask you to develop a very simple simulation, just so that you can begin considering what would be necessary for the final project assignment. Hence, you are expected to extend this implementation with the specific components of your project design. For instance, you might be interested in adding mouse interaction with the scene, or including support for off-line rendering, which is critical on Tegner (i.e., all the rendering is performed using the command line, without a physical display).
As we are not going to integrate CUDA in any of the exercises below, we recommend using the computers of the laboratory room to solve this assignment. You can also use your own laptop. This will give you hands-on experience with the different aspects of OpenGL. But, above all, you should NOT use Tegner for now, as it would require you to add EGL support for off-line hardware-accelerated visualization. This is out of the scope of the assignment, so check the course material on Canvas for more information about this topic towards the final project.
To submit your assignment, please, prepare a small report explaining the individual technical details and decisions taken to solve the assignment, alongside the experimental setup (e.g., GPU used, OpenGL version, libraries, ...). For the submission, create a folder that contains the report in PDF format and the source code. The folder must follow the structure illustrated below:
Feel free to include any extra files that your code requires inside the “source” folder, such as header files, classes, or dependencies. In addition, note that the exercises are designed to help you build your own OpenGL-based visualization framework step by step. Compared to the CUDA assignment, we do not expect you to submit separate implementations for each exercise, but a single project with all the exercises solved. We provide a base source code repository in the first exercise to help you out.
When you have everything ready, create a .tar.gz file (or .zip file) with the folder and upload it to your assignment in Canvas using the following name:
The assignment is individual, but students are once again encouraged to discuss the exercises with the rest of the classmates. We also invite you to join the discussion on Canvas and share your experiences over there. Understanding OpenGL is probably one of the trickiest topics of the course, so do not forget to join the discussion!
Exercise 1 - Your first OpenGL application!
In principle, we can assume that every OpenGL application requires a software component that handles the window creation and interaction (e.g., input events), a component that loads the OpenGL API in order for it to be used, and an implementation of the OpenGL specification. Therefore, the first step is to make sure you have all the dependencies correctly installed and ready to use. This strictly depends on the machine you are using and the operating system that you prefer, so do not feel frustrated if it takes longer than you initially expected.
For this assignment, we recommend using FreeGLUT for the windowing system (you might also use GLFW, as in the video lectures), GLAD to load the OpenGL API, GLM to handle the matrix / vector operations, and the OpenGL implementation that is provided automatically with the driver of your graphics card manufacturer. Later, we give you a source code repository that builds GLAD and GLM, so you just have to double-check that you have FreeGLUT and OpenGL support on your machine. Use ldconfig to confirm that the libraries are installed:
ldconfig -p | grep libglut
ldconfig -p | grep libGL
These two dependencies are usually available on every platform, but if the previous commands did not produce any output, you might need to look for the specific installation details for your machine. For instance, if you plan to use your own laptop with Ubuntu (or an equivalent distribution), you can simply run the following command to make sure everything is installed:
sudo apt-get install xorg-dev libgl-dev libegl1-mesa-dev freeglut3-dev libglu1-mesa-dev
In some other cases, you will need to download and manually compile the libraries, but this is rarely necessary! Above all, always keep in mind that the workstations available in the laboratory rooms have everything prepared for you to solve the assignment.
After installing the dependencies, let us test if everything is fine by cloning a repository that contains the base source code for the OpenGL assignment. Execute git clone to retrieve the content:
git clone https://bitbucket.org/SRGm8/agp-opengl-hw3.git
After cloning the repository, you must observe the following structure under the "agp-opengl-hw3" folder:
The "modules" folder contains GLAD and GLM, which are needed to load the OpenGL API and to perform matrix / vector operations, respectively. The "scripts" folder contains a script to build GLAD and GLM ("build_libs.sh"), and another script to compile and execute your application ("run.sh"). The rest of the files contain the basic building blocks for you to implement the exercises of the assignment. Here is a brief description for each file:
- main.cpp: This is the main application that initializes OpenGL and launches the main rendering loop. You should ideally work only on this file to solve most of the exercises of the assignment.
- common.hpp: As a good coding practice, this file includes all the header files and constants that your source code will probably require. Feel free to update it as needed.
- util.hpp / util.cpp: This helper utility class contains a method to load and compile the Vertex and Fragment Shaders, and also a method to show information about OpenGL.
- Makefile: This will help to compile your source code, leaving the executable inside a "bin" folder.
At this point, to solve the first exercise, you must complete the following tasks:
- Access the "agp-opengl-hw3" folder and run the "build_libs.sh" script to build GLAD and GLM.
- You must run the script as follows:
- If errors arise, ensure that you have the dependencies installed, as per the instructions given previously. Do not feel frustrated, it takes time to solve these issues!
- Build the application using the "run.sh" script, which should also execute it and open a new OpenGL window if everything is correct.
- You must run the script as follows:
- Once again, in case that errors arise, make sure that you have the dependencies installed.
Note that you might need to modify the Makefile to specify the correct path to FreeGLUT and OpenGL, if these are not installed under the default path (e.g., "/usr/include/GL" and "/usr/lib"). In that case, specify the correct path with the -I and -L flags. In addition, modify the Makefile inside the GLAD folder if the library is not building either.
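For instance, a hypothetical fragment of such a Makefile change could look as follows (the paths below are made up for illustration; replace them with the actual location of your installation):

```makefile
# Hypothetical example: point the compiler and linker at a custom
# FreeGLUT / OpenGL installation (adjust the paths to your system).
CXXFLAGS += -I/opt/freeglut/include
LDFLAGS  += -L/opt/freeglut/lib
LDLIBS   += -lglut -lGL
```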
At this point, if you did not have any compilation issues, you should see a small window with a green background when you execute the "run.sh" script:
If you do, congratulations, this is a very important step! If you are still struggling, do not worry, review the exercise from the beginning and also share your issues on Canvas. We are all here to help out and enjoy together coding OpenGL!
Exercise 2 - Your (second) first OpenGL application!
For the second exercise of the laboratory, we ask you to create your very first OpenGL application based on the repository that we have provided you in the previous exercise. This application will define the OpenGL Context and create a new window with a specific size, which will be useful for you to display 3D renderings later. For this purpose, you must solve the following challenges:
- Open the "main.cpp" and "Makefile" files, and try to understand their content.
- Make sure that you examine carefully the source code and also read the comments available in both files.
- Hint: Take your time for this task; it is important.
- Configure the windowing library to use RGB (or RGBA), and set up double-buffering to allow for front and back buffer support.
- Double-buffering will prevent flickering and other undesirable visual artifacts.
- With FreeGLUT, you can use glutInitDisplayMode() to easily configure these rendering options.
- Create the window with a specific size (e.g., 1280x720 pixels).
- Replace glFlush() with a combination of glutSwapBuffers() and glutPostRedisplay().
- Right now, the code uses glFlush() to flush the rendering commands, as it assumed that only one (front) buffer was set. As we want double-buffering support, we need to make sure that OpenGL swaps the front and back buffers.
- Note that using glutPostRedisplay() will force the rendering loop to remain constantly active, instead of rendering once. You will see that the display() callback is now frequently called, so feel free to remove the output message.
- Change the background color from green to black.
- The color format is RGBA, and the values vary from 0.0 (minimum) to 1.0 (maximum).
- Capture the keystrokes of your keyboard and print the pressed keys on the command line.
- It is enough to use printf to output each keystroke.
- Capture the “ESC” key on your keyboard and safely close the application when you press it, releasing any memory that you have allocated and shutting down the render loop.
- Hint: The "ESC" key, like any other key on the keyboard, corresponds to a value in the ASCII chart. This is not the case for the arrows and the "FN" keys.
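The ESC handling can be kept in a small helper that is easy to test independently of the windowing code. A minimal sketch, where the helper name is our own and the FreeGLUT wiring is shown as comments:

```cpp
// ASCII code of the "ESC" key; arrows and "FN" keys have no ASCII
// value and arrive through glutSpecialFunc() instead.
const unsigned char KEY_ESCAPE = 27;

// Hypothetical helper (the name is our own): decide whether a
// keystroke should terminate the application.
bool is_exit_key(unsigned char key)
{
    return key == KEY_ESCAPE;
}

// In main.cpp, the FreeGLUT callback could use it roughly like this:
//
//   void keyboard(unsigned char key, int x, int y)
//   {
//       printf("Key pressed: %c (%d)\n", key, key);
//       if (is_exit_key(key)) {
//           // release any allocations here, then stop the loop
//           glutLeaveMainLoop();  // FreeGLUT's clean way to exit
//       }
//   }
//
//   // registered during initialization:
//   glutKeyboardFunc(keyboard);
```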
After you have finished, you should be observing a window like the following:
If you are struggling to open the window, or no background is visible, there are several aspects that you should consider:
- Initialize the windowing library (e.g., FreeGLUT) and configure / create the window before calling any other OpenGL functionality.
- After this, load the OpenGL API (e.g., with GLAD) before accessing any OpenGL-specific functions. If you do not load the API, a segmentation fault may arise.
- Make sure you have launched the main render loop and that you still have the callback function to display the scene.
- Make sure you force OpenGL to keep rendering by swapping the buffers and forcing refresh.
This is probably the most important step, so try to ensure that everything is working in your application before proceeding. The following links may help you solve the exercise:
Exercise 3 - Defining the Program Object: Vertex and Fragment Shaders
When we have our window open and a rendering loop active, it is time to display 3D objects in our scene! For this purpose, we must properly define the Vertex and Fragment Shaders in use. Without them, it would be difficult for OpenGL to understand where we want each vertex to be displayed, or what the color of the mesh should be. Thus, the idea is to solve the following challenges:
- Define the Vertex and Fragment Shaders for your scene.
- You should create two different .glsl files (i.e., one for each Shader).
- For the Vertex Shader, copy the vertex position coming from the input as the output vertex (i.e., gl_Position), with no intermediate transformation.
- For the Fragment Shader, make sure you use a bright color for the output.
- Create and bind an OpenGL Program Object that contains your Shaders.
- Use the util helper functionality to load the Shaders and create the OpenGL Program Object (i.e., you do not have to implement your own loader).
- Bind the Program Object to the current OpenGL Context with glUseProgram().
- Render a 3D object on-screen by introducing the necessary primitives inside the display() callback function.
- A sphere or a cube would be preferable, but you can start with something basic, such as a triangle.
- Hint: You can easily render a cube or a sphere by calling pre-defined shapes from FreeGLUT (e.g., glutSolidSphere()).
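As an illustration, a minimal pair of shaders matching the description above could look as follows. The GLSL version, the input location and the variable names are our own choices; depending on your driver and on how the vertices are submitted (e.g., by glutSolidSphere()), you may need to adapt the input declaration:

```glsl
// --- vertex.glsl: copy the input position through unchanged ---
#version 330 core
layout(location = 0) in vec3 position;

void main()
{
    gl_Position = vec4(position, 1.0);
}

// --- fragment.glsl: output a bright, constant color ---
#version 330 core
out vec4 frag_color;

void main()
{
    frag_color = vec4(0.6, 1.0, 0.6, 1.0);  // light green
}
```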
When everything is ready, compile and run the application using the "run.sh" script. You will notice that the scene looks flat and that the “3D” object is stretched. This happens because we have not established any of the transformations that define the perspective of the scene, or even the position of the virtual camera:
In the screenshot, we are using glutSolidSphere() with a radius of 1 to draw the object, and a light-green color as output from the Fragment Shader. If we were using lighting, the sphere would have looked slightly more realistic. After all, you need to keep in mind that the purpose of the rasterization process is to represent a virtual 3D scene into something that can be displayed on a 2D screen. But, do not worry, we are not covering lighting in this course and it is not a requirement for the final project. You can alternatively use some tricks, such as displaying the wireframe of the object and/or playing with the alpha channel to give the feeling of depth. We will ask you to enable these features later within this assignment.
The following links may help you solve this exercise:
Exercise 4 - Creating the Model-View-Projection
The current scene does not really represent what we would call a “3D scene”. We now ask you to define the Model-View-Projection (MVP) to transform the render into something more reasonable. Therefore, we expect you to complete the following tasks:
- Create a uniform input named MVP for the Vertex Shader that contains the MVP matrix.
- Multiply this matrix by the vertex position to generate the correct output vertex position from the Vertex Shader.
- Use the matrix types of GLSL (e.g., mat4).
- Create a Model M matrix that contains the position of the object in the virtual 3D world.
- For simplicity, set the object at the origin (0,0,0).
- Hint: You can define and alter the content of this matrix before rendering your object, which will allow you to easily solve the next exercise.
- Define a View V matrix that contains the position and orientation of the virtual camera.
- Initially, situate the camera at a certain distance by translating it over the positive Z-axis.
- Hint: The positive direction of the Z-axis is pointing "towards you", following the "right-handed" coordinate system.
- Define a Perspective Projection P matrix to represent the field-of-view (FOV) of the camera and set up the near / far clipping planes.
- Like in real-life cameras, the FOV defines how objects are projected into the image plane. The larger the FOV of the camera, the bigger the distortion but also the more the content that is captured (and vice versa).
- Optional: The aspect ratio depends on the resolution of the window, which might change over time. Try to capture this event with glutReshapeFunc() and alter the viewport with glViewport().
- Multiply the M, V and P matrices to generate a single MVP matrix. Set this matrix as the uniform input for the Vertex Shader.
- The index of the MVP uniform inside the Vertex Shader can be obtained using glGetUniformLocation().
- In order to set the value, use glUniformMatrix4fv() to define the MVP for the scene for each render.
- Hint: The order of the multiplications matters!
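To see why the order matters, here is a small sketch with plain 4x4 arrays in column-major storage (the convention GLM and OpenGL use); in your application you would simply compute glm::mat4 MVP = P * V * M. The helper names below are our own:

```cpp
#include <array>

using Mat4 = std::array<float, 16>;  // column-major, like GLM / OpenGL

Mat4 identity()
{
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

// c = a * b (standard 4x4 matrix product in column-major storage)
Mat4 multiply(const Mat4 &a, const Mat4 &b)
{
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return c;
}

Mat4 translation(float x, float y, float z)
{
    Mat4 m = identity();
    m[12] = x; m[13] = y; m[14] = z;  // last column holds the offset
    return m;
}

Mat4 scale(float s)
{
    Mat4 m = identity();
    m[0] = m[5] = m[10] = s;  // uniform scale on the diagonal
    return m;
}

// Apply m to the point (x, y, z, 1), returning the transformed x/y/z.
std::array<float, 3> transform(const Mat4 &m, float x, float y, float z)
{
    std::array<float, 3> r{};
    for (int row = 0; row < 3; ++row)
        r[row] = m[0 * 4 + row] * x + m[1 * 4 + row] * y
               + m[2 * 4 + row] * z + m[3 * 4 + row];
    return r;
}
```

For example, scale(2) * translation(1,0,0) first translates the point and then scales it (origin ends at x = 2), while translation(1,0,0) * scale(2) scales first and translates after (origin ends at x = 1). The same reasoning is why the product must be written as P * V * M.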
In order to generate the correct values for the MVP, you can use functionality from GLM directly, such as glm::lookAt() and glm::perspective(). Transformations such as glm::translate() and glm::rotate() can be useful to position your 3D model in the virtual world. In addition, use GLM types to resemble the GLSL types (e.g., glm::mat4). If everything is correct, you should see a scene similar to the one below:
The following links might help you solve and understand some of the tasks requested on this exercise:
Exercise 5 - Visualizing Particles
The last step is to create a small simulation using the CPU and multiple particles. The idea is to alter the position of each particle over time on the CPU and use this information to render the different particles on-screen with OpenGL. The type of simulation being performed does not really matter. For simplicity, if you prefer, you can use the same CPU implementation as in the CUDA assignment. The important aspect of this exercise is that you understand how to render more than one object in different positions, which will be useful in preparation for the final project.
Therefore, we ask you to complete the following tasks:
- Enable support for the Alpha channel and alter the opacity of your 3D object inside the Fragment Shader.
- Use glEnable() and glBlendFunc() to enable the support for the Alpha channel.
- The output color in the Fragment Shader is defined as a vec4 vector type, where x represents Red, y represents Green, z represents Blue, and w represents Alpha (RGBA).
- Optional: Draw the wireframe of the object to increase the depth feeling of the scene. For instance, use glutWireSphere() directly, or set GL_LINE with glPolygonMode() before rendering. You will need to render the object twice, once for the "solid" version and once for the "wired" version.
- Instead of a fixed point-of-view in the scene, capture the keyboard arrows "left" and "right" to alter the eye (position) of the virtual camera in the 3D world.
- You must rotate the camera around the origin (0,0,0) in the XZ plane, following a circle.
- Keep it simple if you cannot find a good solution, but try to end-up with an implementation that feels natural and corresponds to how you would expect the camera to react.
- Optional: Capture "+" and "-" to zoom-in and zoom-out, respectively. You can simply alter the FOV parameter of the camera while defining the perspective projection matrix.
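The circular motion above boils down to a little trigonometry. A minimal sketch of the eye computation (the helper name is our own; in the application, its result would be fed to glm::lookAt() as the eye position):

```cpp
#include <cmath>
#include <array>

// Hypothetical helper: orbit the camera around the origin (0,0,0) in
// the XZ plane. 'angle' is in radians and grows/shrinks when the
// "left"/"right" arrows are pressed; 'radius' is the distance to the
// origin. At angle 0 the camera sits on the positive Z-axis, matching
// the initial setup of Exercise 4.
std::array<float, 3> orbit_camera(float angle, float radius)
{
    return { radius * std::sin(angle),   // x
             0.0f,                       // y: stay at the object's height
             radius * std::cos(angle) }; // z
}
```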
- Create an array of size NUM_PARTICLES that contains the position of each particle in the 3D view.
- Use the vector types of GLM (e.g., glm::vec3) for each element of the array.
- You can also use the declaration of the struct Particle from the CUDA assignment, but integrating the GLM types instead. These are compatible with CUDA as well.
- The number of particles is not relevant for the exercise, but at least try to set a reasonable number (e.g., 200).
- On each render inside the display() callback function, update the position of every particle in the array and render each 3D object by altering the model M matrix.
- Hint: This means that you must set a different MVP uniform value per render.
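A minimal sketch of the particle array and the per-frame update, assuming a trivial constant-velocity motion (the struct layout and function name are our own; in the real code you would use glm::vec3 instead of the hand-rolled Vec3):

```cpp
#include <array>
#include <cstddef>

const std::size_t NUM_PARTICLES = 200;

// Stand-in for glm::vec3, used here to keep the sketch self-contained.
struct Vec3 { float x, y, z; };

struct Particle
{
    Vec3 position;
    Vec3 velocity;
};

// Hypothetical update: move every particle along its velocity. In
// display(), after this update, you would rebuild the Model matrix M
// per particle (e.g., glm::translate(glm::mat4(1.0f), p.position)),
// recompute the MVP, upload it with glUniformMatrix4fv(), and only
// then issue the draw call (e.g., glutSolidSphere()) for that particle.
void update_particles(std::array<Particle, NUM_PARTICLES> &particles,
                      float dt)
{
    for (Particle &p : particles) {
        p.position.x += p.velocity.x * dt;
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
    }
}
```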
Now that you are rendering more than one element on-screen, you might have noticed that something is wrong and that the screen fills up (or almost). The reason is that you need to clear the Back Buffer before rendering; otherwise, OpenGL will render on top of the previous renderings! To overcome this issue, call the glClear() function at the beginning of your display() handler. We already provide a call to this function in the base "main.cpp" file, so make sure that it is still there!
At this point, you will see a fun and exciting 3D scene! Here is an example with just a few particles represented by large spheres:
The following links might help you solve some of the challenges of this exercise:
Important note: The method suggested on this exercise to render each particle is inefficient and only suitable for a few hundred elements. Other techniques, such as Instanced Rendering, must be used to achieve better performance and integration with CUDA. We cover these topics in the course material for the final project. For now, this is enough for the assignment.
(Optional) Exercise 6 - The Render Object
Based on the implementation from the previous exercises, and towards the final project, we suggest that you optionally create a Render class that contains all the necessary elements to render in OpenGL. This class can be defined as abstract, allowing your application to dynamically support different types of rendering. The following UML diagram exemplifies the inheritance hierarchy:
In this figure, we show two possible subclasses that allow for different rendering methods (e.g., rendering on-screen and also off-line rendering to files). The abstract class in the example has three methods that are common for each type of rendering, but it is up to you to decide the most convenient design that fits your implementation details. You can also choose not to use exactly this hierarchy of classes and come up with your own design. In any of these cases, the goal is to define modular and well-designed source code, following good software engineering practices.
The only task left for the assignment is to define a base abstract class that enables rendering (e.g., Render) and a specific implementation for rendering on-screen (e.g., ScreenRender). You should be able to implement this latter class with the content from the previous exercises.
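One possible skeleton for this hierarchy is sketched below. The class and method names are our own assumptions (the figure only states that the base class has three common methods); adapt them to your actual design:

```cpp
// Hypothetical abstract base class for all rendering back-ends.
class Render
{
public:
    virtual ~Render() {}
    virtual void init(int width, int height) = 0;  // create context, load API
    virtual void display() = 0;                    // draw one frame
    virtual void release() = 0;                    // free resources
};

// On-screen implementation: in the real class, init() would set up
// FreeGLUT + GLAD and display() would hold the body of the display()
// callback from the previous exercises.
class ScreenRender : public Render
{
public:
    void init(int width, int height) override
    {
        m_width  = width;
        m_height = height;
        // real code: glutInit(), glutCreateWindow(), gladLoadGL(), ...
    }
    void display() override { /* real code: clear, set MVP, draw, swap */ }
    void release() override { /* real code: delete buffers, shaders, ... */ }

    int width() const { return m_width; }

private:
    int m_width  = 0;
    int m_height = 0;
};
```

An off-line FileRender subclass for Tegner could later be added next to ScreenRender without touching the application code that talks to the Render interface.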