Module 1: Introduction to Computer Graphics


Hello and welcome to lecture number 4 in the course Computer Graphics.
The series of steps involved in the generation of color values is collectively called the graphics pipeline. This is very important terminology, and in our subsequent lectures we will discuss the stages of the pipeline in detail. So let us get some introductory ideas on the pipeline and its stages.
There are several stages, as I mentioned. The first stage is essentially defining the objects. When we talk of creating a scene or an image, it contains objects, and there needs to be some way to represent these objects in the computer. The activity where we define the objects that are going to be part of the image constitutes the first stage of the pipeline, which is called the object representation stage. For example, as you can see in the figure on the screen, we want to generate the image of a cube with color values as shown on the right-hand part of the screen.
Now this image contains an object, which is a cube, and on the left-hand side we have defined this cube. When we talk of defining, what we mean, as we can understand intuitively, is specifying the vertices or the edges with respect to some reference frame. That is the definition in this simple case: what the vertices are, or what the edges are as pairs of vertices.
Of course, the cube is a very simple object; more complex objects may require more complex definitions, that is, more complex ways of representing the objects.
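As a rough illustration, such a definition can be stored in a program simply as a list of vertex coordinates in the object's local reference frame together with a list of edges given as pairs of vertex indices. The following is a minimal sketch in C; the structure and names are illustrative, not part of any particular library.

```c
/* A minimal sketch of an object representation: a unit cube defined in its
   own (local) coordinate frame by 8 vertices and 12 edges, where each edge
   is stored as a pair of vertex indices. Names are illustrative only. */
typedef struct { float x, y, z; } Vertex;

static const Vertex cube_vertices[8] = {
    {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0},   /* back face  */
    {0, 0, 1}, {1, 0, 1}, {1, 1, 1}, {0, 1, 1}    /* front face */
};

static const int cube_edges[12][2] = {
    {0, 1}, {1, 2}, {2, 3}, {3, 0},               /* back face  */
    {4, 5}, {5, 6}, {6, 7}, {7, 4},               /* front face */
    {0, 4}, {1, 5}, {2, 6}, {3, 7}                /* connecting edges */
};
```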
Accordingly, several representation techniques are available for efficient creation and manipulation of images. Note the term efficient: what we refer to here is the fact that displays differ and the underlying hardware platforms differ. The computational resources available to display something on a desktop or a laptop are likely to be different from those available to display something on a small mobile device or on a wearable device screen.
Accordingly, our representation techniques should be able to utilize the available resources to the extent possible and should allow users to manipulate images in an interactive setting. So efficiency is defined with respect to the available computing resources and the way we make optimum use of those resources.
Once we define the objects, they are passed through the subsequent pipeline stages to generate and render images on the screen. So the first stage is defining the objects, and in the subsequent stages we take these object definitions as input, generate the image representation and render it on the screen.
What are those subsequent stages? The first one is modeling transformation, which is the 2nd stage of the pipeline. As I said, when we are defining an object, we are considering some reference frame with respect to which we define it. For example, take the cube that we have seen earlier: to define the cube we need to specify its coordinates, but coordinates with respect to what? There we assume a certain reference frame.
The reference frame with respect to which an object is defined is more popularly called the local coordinate system of the object. So objects are typically defined in their own, or local, coordinate systems. Multiple objects are then put together to create a scene; each object is defined in its own local coordinate system, and when we combine them we are essentially combining these different reference frames.
By combining those different objects, we create a new assembly of objects in a new reference frame, which is typically called the world coordinate system. Take the example shown in this figure. As you can see, there are many objects: some cubes, spheres and other objects such as cylinders. Each of these objects is defined in its own coordinate system.
Now, the whole scene consists of all these objects assembled together from their own coordinate systems. Here again we assume another coordinate system in terms of which this assembly of objects is defined; the coordinate system in which we have assembled them is called the world coordinate system. So there is a transformation that takes an object from its own coordinate system to the world coordinate system. That transformation is called the modeling transformation, which is the 2nd stage of the graphics pipeline.
So in the first stage we define the objects, and in the second stage we bring those objects together into the world coordinate system through the modeling transformation, which is also sometimes known as the geometric transformation. Both terms are used, modeling transformation or geometric transformation; that is the 2nd stage of the graphics pipeline.
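To make the idea concrete, here is a minimal sketch in C of a modeling transformation that takes a vertex given in an object's local coordinate system and places it in the world coordinate system through a scale followed by a translation. A full system would normally express this as a single 4x4 matrix; the names here are purely illustrative.

```c
#include <stdio.h>

typedef struct { float x, y, z; } Point3;

/* Modeling (geometric) transformation sketch: place a vertex defined in an
   object's local coordinate system into the world coordinate system by
   scaling it and then translating it to its position in the scene. */
Point3 local_to_world(Point3 p, Point3 scale, Point3 translate)
{
    Point3 w;
    w.x = p.x * scale.x + translate.x;
    w.y = p.y * scale.y + translate.y;
    w.z = p.z * scale.z + translate.z;
    return w;
}

int main(void)
{
    Point3 local = {1.0f, 1.0f, 0.0f};      /* a cube vertex in local coordinates   */
    Point3 scale = {2.0f, 2.0f, 2.0f};      /* make the cube twice as large          */
    Point3 move  = {5.0f, 0.0f, -3.0f};     /* position of the cube in the world     */
    Point3 world = local_to_world(local, scale, move);
    printf("world coordinates: (%.1f, %.1f, %.1f)\n", world.x, world.y, world.z);
    return 0;
}
```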
Once the scene is constructed, the objects need to be assigned colors, which is done in the 3rd stage of the pipeline called the lighting or illumination stage. Take, for example, the images shown here. In the left figure we have simply the object; in the right figure we have applied colors to the object surfaces. As you can see, the way the colors are applied makes it clear which surface is closer to the viewer and which surface is further away.
In other words, it gives us a sensation of 3D, whereas without colors, like the one shown here, that clarity is not there. So to get a realistic image that gives us a sensation of 3D, we have to assign colors. Assignment of colors is the job of the 3rd stage, which is called the lighting or illumination stage.
As you are probably aware, color is a psychological phenomenon, and it is linked to the way light behaves, in other words, to the laws of optics. So what do we do in the 3rd stage? We essentially try to mimic these optical laws; we try to mimic the way we see or perceive color in the real world, and based on that we assign colors in the synthesized scene.
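As one example of mimicking an optical law, a very common model used at this stage is Lambert's law of diffuse reflection, where the brightness of a surface point falls off with the cosine of the angle between the surface normal and the direction towards the light source. The sketch below in C only illustrates this single idea; the illumination models actually used will be discussed in later lectures, and the names here are assumptions for the example.

```c
typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Lighting-stage sketch (Lambertian diffuse reflection): the intensity at a
   surface point is proportional to the cosine of the angle between the
   surface normal and the direction towards the light source. Both vectors
   are assumed to be normalized. */
float diffuse_intensity(Vec3 normal, Vec3 to_light,
                        float light_intensity, float surface_reflectance)
{
    float cos_angle = dot(normal, to_light);
    if (cos_angle < 0.0f) cos_angle = 0.0f;   /* surface faces away from the light */
    return light_intensity * surface_reflectance * cos_angle;
}
```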
So first we define an object, in the 2nd stage we bring objects together to create a scene, and in the 3rd stage we assign colors to the object surfaces in the scene. Up to this point, everything was done in a 3D setting in the world coordinate system. But when we get to see an image, the computer screen is 2D, so essentially what we require is a mapping from the 3D world coordinate scene to the 2D computer screen. That mapping is done in the 4th stage, that is, the viewing transformation.
In this stage we perform several activities, and the whole process is similar to taking a photograph. Consider yourself to be a photographer: you have a camera and you are capturing a photo of a scene.
What do you do? You place the camera near your eye, focus on some object that you want to capture, and then capture it with the camera system; this is followed by seeing it on the camera display or screen, if you are using a digital camera. This process of taking a photograph can be mathematically analyzed into several intermediate operations which in themselves form a pipeline, a pipeline within the broader graphics pipeline. So the 4th stage, viewing transformation, is itself a pipeline which is part of the overall graphics pipeline. This pipeline, in which we transform a 3D world coordinate scene to a 2D view plane scene, is called the viewing pipeline.
What do we do in this pipeline? We first set up a camera coordinate system, which is also referred to as the view coordinate system. Then the world coordinate scene is transformed to the view coordinate system; this step is called the viewing transformation. So we have set up a new coordinate system, the camera coordinate system, and then we transform the world coordinate scene to the camera coordinate scene.
From there we make another transformation: we transfer the scene to a 2D view plane. This step is called the projection transformation. So we have the viewing transformation followed by the projection transformation. For the projection, we define a region in the view coordinate space which is called the view volume.
For example, in the figure shown here, the frustum is defining a view volume. We want to capture objects that are present within this volume; objects outside it we do not want to capture. That is typically what we do when we take a photograph: we select some region of the scene and then we capture it. So whichever object is outside will not be projected, and whichever objects are inside the volume will be projected. Here we require one additional process, a process to remove objects that are outside the view volume. Those objects can be fully outside or partially outside, and in both cases we need to remove them: when an object is fully outside, we remove it completely, and when an object is partially outside, we clip the object and keep only the part that is within the view volume, removing the outside part. The overall process is called clipping.
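As the simplest possible illustration of the in/out decision involved, the sketch below in C tests whether a single point lies inside an axis-aligned view volume given by its minimum and maximum corners. Actual clipping algorithms, which we will cover later, clip lines and polygons against the volume and keep the inside parts; the names here are illustrative.

```c
typedef struct { float x, y, z; } Point3;

/* Clipping sketch: decide whether a point lies inside an axis-aligned view
   volume given by its min/max corners. Real clipping algorithms go further
   and cut lines and polygons at the volume boundary, keeping the inside
   parts; this only makes the basic in/out decision for a point. */
int point_in_view_volume(Point3 p, Point3 vmin, Point3 vmax)
{
    return p.x >= vmin.x && p.x <= vmax.x &&
           p.y >= vmin.y && p.y <= vmax.y &&
           p.z >= vmin.z && p.z <= vmax.z;
}
```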
Also, when we are projecting, we consider the viewer position, where the photographer is situated and in which direction he or she is looking. Based on that position, some objects may appear fully visible, some may appear partially visible, and other objects will be invisible, even though all of them may be within the same view volume.
For example, with respect to this particular view position, if one object is fully behind another, it will be invisible; if it is partially behind, it will be partially visible; and if the two are not aligned in the same direction, both will be fully visible.
We have to take care of this fact also before projection, which requires some further operations and computations. The operations that we perform to capture this viewing effect are typically called hidden surface removal operations or, equivalently, visible surface detection operations. So to generate a realistic viewing effect, along with clipping we perform hidden surface removal or visible surface detection.
After the clipping and hidden surface removal operations, we project the scene on the view plane, which is a plane defined in the view coordinate system.
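As a sketch of what the projection involves mathematically, the simplest perspective projection maps a point given in the view coordinate system onto a view plane at distance d from the center of projection using similar triangles: x' = d·x/z and y' = d·y/z. The small C fragment below assumes the camera looks along the z axis and the point lies in front of it; the different projection types and their details are covered later, and the names are illustrative.

```c
/* Perspective-projection sketch: project a point (x, y, z), given in the
   view (camera) coordinate system, onto a view plane at distance d from the
   center of projection. Assumes z != 0 and the point is in front of the
   camera along the z axis. */
void project_point(float x, float y, float z, float d, float *xp, float *yp)
{
    *xp = d * x / z;
    *yp = d * y / z;
}
```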
Now, there is one more transformation. Suppose, in the right-hand figure, this is the object which is projected here on the view plane. The object may be displayed on any portion of the computer screen; it need not be at exactly the same position as on the view plane. For example, the object may be displayed in a corner of the display. So we differentiate between two concepts here: one is the view plane region, which is typically called the window, and the other is the display region on the actual screen, which we call the viewport. One more transformation therefore remains in the viewing pipeline, that of transferring the content from the window to the viewport. This is called the window-to-viewport transformation.
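The mapping itself is a simple scale and shift that preserves the relative position of a point within the rectangle. The following sketch in C shows one way to write it; the structure and names are illustrative.

```c
/* Window-to-viewport transformation sketch: map a point (xw, yw) in the
   window (on the view plane) to a point (xv, yv) in the viewport (a region
   of the display), preserving its relative position within the rectangle. */
typedef struct { float xmin, ymin, xmax, ymax; } Rect;

void window_to_viewport(float xw, float yw, Rect window, Rect viewport,
                        float *xv, float *yv)
{
    float sx = (viewport.xmax - viewport.xmin) / (window.xmax - window.xmin);
    float sy = (viewport.ymax - viewport.ymin) / (window.ymax - window.ymin);
    *xv = viewport.xmin + (xw - window.xmin) * sx;
    *yv = viewport.ymin + (yw - window.ymin) * sy;
}
```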
In summary, we can say that in the 4th stage there are 3 transformations. What are those transformations? First we transform the world coordinate scene to the camera or view coordinate scene. Then, from the camera coordinate scene, we perform the projection transformation to the view plane, and then the view plane window is transformed to the viewport. So these are the 3 transformations.
Along with those, there are 2 major operations that we perform here: one is clipping, that is, clipping out the objects that lie outside the view volume, and the other is hidden surface removal, which means creating a realistic viewing effect with respect to the viewer position. So that is the 4th stage.
So, we defined objects in the first stage; in the 2nd stage we combined those objects into the world coordinate scene; in the 3rd stage we assigned colors to the object surfaces in the world coordinate scene; and in the 4th stage we transformed the world coordinate scene to the image on the viewport through a series of transformations which form a sub-pipeline within the overall pipeline.
Those sub-pipeline stages are the viewing transformation, the projection transformation and the window-to-viewport transformation. This sub-pipeline is called the viewing pipeline and is part of the overall graphics pipeline. In the 4th stage, along with this viewing pipeline, two more operations are performed: clipping and hidden surface removal.
One more stage remains, the 5th stage, which is called scan conversion or rendering. We mentioned earlier that we transform to a viewport. Now, the viewport is an abstract representation of the actual display. If you recollect our discussion on raster displays, the actual display contains a pixel grid.
So essentially the display contains locations which are discrete; we cannot assume that any arbitrary point in the image has a corresponding point on the screen. For example, if our image has a vertex at location (1.5, 2.5), the screen cannot have such a location, because on the screen we only have integer coordinate values due to the discrete nature of the grid. We can have a pixel located at (2, 2) or (3, 3) or (1, 1) or (1, 2), but not at the real-valued location (1.5, 2.5).
So if we get a vertex in our image located at (1.5, 2.5), we must map it to these integer coordinates. The stage where we perform this mapping is called the scan conversion stage, which is the 5th and final stage of the pipeline. For example, consider the line shown here, with end points (2, 2) and (7, 5). The intermediate points may not have integer coordinate values, but in the actual display we can have pixels, these circles, only at integer coordinate values.
So we have to map these non-integer coordinates to integer coordinates. That mapping is the job of the 5th or scan conversion stage, which is also called rasterization. As you can see, it may lead to some distortion: due to the mapping we may not get the exact points on the line, and instead we may have to satisfy ourselves with approximate points that lie close to the actual line. For example, this pixel here is not exactly on the line but is the closest possible pixel with respect to the line.
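As an illustration of the mapping, the sketch below in C rasterizes the line from (2, 2) to (7, 5) with a simple DDA-style stepping scheme, rounding each intermediate point to the nearest pixel; the rounding is precisely where the distortion comes from. The specific line-drawing algorithms used in practice are the subject of later lectures, and the names here are illustrative.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Scan-conversion sketch: step along the ideal line between two endpoints
   and round each intermediate point to the nearest pixel location. */
void rasterize_line(int x0, int y0, int x1, int y1)
{
    int steps = abs(x1 - x0) > abs(y1 - y0) ? abs(x1 - x0) : abs(y1 - y0);
    if (steps == 0) steps = 1;                 /* degenerate case: a single point */
    float dx = (float)(x1 - x0) / steps;
    float dy = (float)(y1 - y0) / steps;
    float x = (float)x0, y = (float)y0;
    for (int i = 0; i <= steps; i++) {
        printf("set pixel (%d, %d)\n", (int)lroundf(x), (int)lroundf(y));
        x += dx;
        y += dy;
    }
}

int main(void)
{
    rasterize_line(2, 2, 7, 5);   /* the example line from the lecture */
    return 0;
}
```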
So what is the concern? How do we minimize the distortions? These distortions have a technical name, the aliasing effect; where this name originated from, we will discuss later. Our concern is to eliminate or reduce the aliasing effect to the extent possible, so that we do not get to perceive too much distortion. To address this concern, several techniques are used which are called anti-aliasing techniques. These are used to make the image look as smooth as possible, that is, to reduce the effect of aliasing.
So the display controller actually performs all these stages to finally get the intensity values to be stored in the frame buffer or video memory. These stages are performed through software, of course with suitable hardware support.
For a programmer of a graphics system, of course, it is not necessary to learn the intricate details of all these stages; they involve lots of theoretical concepts and models. If a graphics programmer gets bogged down with all this theory, most of the time will be consumed in understanding the theory rather than in actually developing the system. In order to address this concern, what is done is essentially the development of libraries, graphics libraries.
So there is this theoretical background involved in generating a 2D image. The programmer need not always implement the stages of the pipeline by fully implementing the theoretical knowledge; that would be too much effort, and a major portion of the development effort would go into understanding and implementing the theoretical stages.
Instead, the programmer can use what are called application programming interfaces, or APIs, provided by graphics libraries, where these stages are already implemented in the form of various functions, and the developer can simply call those functions with arguments in their program to perform certain graphical tasks. There are many such libraries available; very popular ones are mentioned here. OpenGL is an open-source graphics library which is widely used. Then there is DirectX by Microsoft, and there are many other commercial libraries available which are proprietary, but OpenGL, being open source, is widely accessible and useful in many situations.
What do these libraries contain? They contain predefined sets of functions which, when invoked with appropriate arguments, perform specific tasks. So the programmer need not know every detail about the underlying hardware platform, namely the processor, memory and OS, to build an application.
For example, suppose we want to assign colors to an object we have modelled. Do we need to actually implement the optical laws to perform the coloring? Note that implementing the optical laws would also involve knowledge of the processors available, the memory available and so on. Instead of requiring that knowledge, we can simply use the function glColor3f with the arguments r, g, b. This function is defined in OpenGL, the open graphics library, and assigns a color to a 3D point.
So here we do not need to know details such as how color is defined in the system, how such information is stored, in which portion of memory it is kept and accessed, how the operating system manages the call, or which processor, CPU or GPU, handles the task. All these complicated details can be avoided, and the programmer can simply use this function to assign color. We will come back to these OpenGL functions in a later part of the lecture where we introduce OpenGL.
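As a small usage sketch of the idea, the fragment below uses legacy (fixed-function) OpenGL calls, glBegin, glColor3f, glVertex3f and glEnd, to draw a single colored triangle. It assumes an OpenGL context and window have already been created by some other means; it is only meant to show how the programmer assigns a color by calling a library function instead of implementing the lighting computations.

```c
#include <GL/gl.h>

/* Assumes an OpenGL context is already current; only illustrates assigning
   a color with glColor3f while specifying geometry. */
void draw_red_triangle(void)
{
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);        /* r, g, b values in the range [0, 1] */
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}
```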
Graphics applications such as painting systems, which probably all of you are familiar with, CAD tools that we mentioned in our introductory lectures, video games and animations are all developed using these functions. So it is important to have an understanding of these libraries if you want to make your life simpler as a graphics programmer. We will come back to these library functions later and discuss in detail some functions popularly used in the context of OpenGL.