So far all I’ve mentioned are the adventures I’ve been having! If you’re wondering, as my Grandpa Joe put it, “when the hell are you going to work?”, then this post is for you :)
About My Project
I am building a program that will allow users to load 3D models (.obj file format) and interact with them via transformations such as rotation, translation, and deformation. My ultimate goal is to implement the ability to select parts of a volume mesh, so the user can select and interact with separate parts of their model. I am using C++, OpenGL, and Qt for my project.
I was first inspired to learn about computer graphics when my (amazing) materials science professor told me about his son, who works on lighting movies at Pixar. I have always loved developing my technical background, but I also love painting and drawing and creating visual things. Until I learned about the opportunity to work in computer graphics, it never occurred to me that I could exercise both of my interests at the same time.
I have virtually no experience with C++, definitely none with OpenGL, and I hadn’t even heard of Qt until my supervisor told me about it. It is also my first time building a GUI (Graphical User Interface … the window and buttons). Initially, I was feeling a little lost and intimidated while trying to familiarize myself with all three new things simultaneously. So I completely understand if this is too much computer science jargon. I won’t be offended if you just look at the pictures. But hopefully some of you will find this interesting :)
After a lot of confusion (days), my supervisor helped me organize my learning strategy and I finally got a program running.
Next I worked on loading actual objects. To do this, I needed to read information from a user-selected .obj file. These files are lists of the vertices (lines beginning with v) and faces (lines beginning with f). Each face is a triangle, made up of three of the vertices.
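To give a sense of what that parsing looks like, here is a minimal sketch. It only handles the plain "v x y z" and "f i j k" lines described above (real .obj files can also carry normals, texture coordinates, and slash-separated indices), and the names `Vec3`, `Face`, and `parseObj` are mine, not from any library:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { int a, b, c; };  // indices into the vertex list (0-based)

// Read "v" and "f" lines from an .obj stream, assuming triangular faces
// and plain "f i j k" indices (no texture/normal slashes).
void parseObj(std::istream& in,
              std::vector<Vec3>& verts, std::vector<Face>& faces) {
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v;
            ls >> v.x >> v.y >> v.z;
            verts.push_back(v);
        } else if (tag == "f") {
            Face f;
            ls >> f.a >> f.b >> f.c;
            --f.a; --f.b; --f.c;  // .obj indices are 1-based
            faces.push_back(f);
        }
    }
}
```

Any line with an unrecognized tag (comments, normals, etc.) is simply skipped, which is enough for the simple test files I’m using.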
Once I have all of the information from the file stored, I have to iterate through each face and ask OpenGL to draw the respective triangle. I’ve been using some models from the Berkeley Garment Library as test files.
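That iteration boils down to expanding the indexed faces into one triangle’s worth of vertices at a time. A sketch of that step, with the actual OpenGL call left as a comment since it depends on the rendering setup (`expandFaces` is my own hypothetical name):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { int a, b, c; };  // indices into the vertex list (0-based)

// Turn indexed faces into a flat list of triangle corners -- the
// per-triangle data one would hand to OpenGL, e.g. inside a
// glBegin(GL_TRIANGLES) ... glEnd() loop in legacy OpenGL, or
// uploaded once to a vertex buffer in modern OpenGL.
std::vector<Vec3> expandFaces(const std::vector<Vec3>& verts,
                              const std::vector<Face>& faces) {
    std::vector<Vec3> out;
    out.reserve(faces.size() * 3);
    for (const Face& f : faces) {
        out.push_back(verts[f.a]);
        out.push_back(verts[f.b]);
        out.push_back(verts[f.c]);
    }
    return out;
}
```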
Yay!! I’ve rendered a model! But it isn’t very exciting to look at. It’s supposed to look 3D. It looks flat because it hasn’t been lit yet. If you draw a sphere on a piece of paper and paint it all one color, it looks like a circle. You need to add lighting and shading to make it look like a sphere, right? Same idea. Cool, I’ll enable lighting.
Ok so enabling lighting isn’t enough. Now all of the models I load are just black. At this point, OpenGL has no idea how this model is supposed to be lit. I can’t just expect OpenGL to know that my vertices are supposed to look like a robe.
In order to really light the model, OpenGL needs the normals for each vertex. A normal is a vector that is perpendicular to the face and points outward. Knowing this, OpenGL can determine which way each vertex is facing at any given position of the model and depict the light accordingly. For the most accurate, smooth shading, the normal for each vertex should be an average of the normals of all of the faces that the vertex belongs to. But right now, for simplicity, I am just using the normal of one face.
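The per-face normal itself comes from a cross product of two edge vectors of the triangle, normalized to unit length. A minimal sketch (the `faceNormal` name is mine; this is the single-face version, not the averaged one):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Normal of the triangle (p0, p1, p2): cross the two edge vectors
// leaving p0, then scale to unit length. The result is what would be
// passed to OpenGL (e.g. via glNormal3f) before each vertex.
Vec3 faceNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 e1{p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    Vec3 e2{p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    Vec3 n{e1.y * e2.z - e1.z * e2.y,
           e1.z * e2.x - e1.x * e2.z,
           e1.x * e2.y - e1.y * e2.x};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

Note that the winding order of the face (counterclockwise vs. clockwise) decides whether this normal points outward or inward, which is one easy place for a sign mistake to hide.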
Cool! Now the models are starting to take some shape :) But obviously something is still wrong. After a couple of hours, I realized there was a  that was supposed to be a  in my normal calculations.
Getting the angel to load was really exciting. This is something I was looking forward to from the beginning, when I just had a triangle. But the black part is there because the angel is crossing the “clipping plane”. The clipping planes are where OpenGL decides that a face is either too close or too far along the z axis to be drawn. Have you ever played a video game and once you get close to an object it disappears? So my next task is to make sure that no matter where the coordinates of the model are, my program finds them and displays the model on the screen.
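One way to approach this is to compute the model’s bounding box right after loading, then pick the near/far clipping distances (e.g. the zNear/zFar arguments of a perspective projection) so the whole box fits inside them. A sketch of the bounding-box half, assuming a non-empty vertex list (`Bounds` and `boundingBox` are my own names):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Bounds { Vec3 min, max; };

// Axis-aligned bounding box of the loaded vertices. Requires at least
// one vertex. Once known, the camera distance and the near/far clipping
// planes can be chosen to enclose the whole box, so no face of the
// model is clipped away no matter where its coordinates sit.
Bounds boundingBox(const std::vector<Vec3>& verts) {
    Bounds b{verts.front(), verts.front()};
    for (const Vec3& v : verts) {
        b.min.x = std::min(b.min.x, v.x); b.max.x = std::max(b.max.x, v.x);
        b.min.y = std::min(b.min.y, v.y); b.max.y = std::max(b.max.y, v.y);
        b.min.z = std::min(b.min.z, v.z); b.max.z = std::max(b.max.z, v.z);
    }
    return b;
}
```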