Module 1: Lighting and Viewing Pipeline

Hello and welcome to lecture number 20 in the course Computer Graphics. We are currently discussing the graphics pipeline, that is, the series of stages that have to be performed to convert a 3D description of a scene into a 2D image on a computer screen or display.

There are five stages. As we have already mentioned, object representation is the first stage, modelling transformations the second, and lighting the third; these three stages we have already discussed completely. Currently, we are in the fourth stage, the viewing pipeline, and there will be one more stage, the fifth, which is scan conversion.

Now, the fourth stage, the viewing pipeline, contains a set of sub-stages. The first sub-stage is a transformation from a 3D world coordinate scene description to a 3D view coordinate scene description. The view coordinate system is also called the eye or camera coordinate system, and this transformation is generally called the 3D viewing transformation, which we have already discussed in earlier lectures.

The second sub-stage is projection: we project the 3D view coordinate description onto the view plane. This projection is performed through a transformation generally called the projection transformation, which we have also discussed in earlier lectures.

There is a third sub-stage in which we perform a mapping from the view plane to a viewport defined in the device coordinate system. This is called the window-to-viewport mapping, where the window is on the view plane and the viewport is in the device coordinate system. This third sub-stage we are going to discuss today.

Before we discuss the mapping, we will cover one important aspect of the projection transformation that we did not discuss in the last lecture: the idea of the canonical view volume. Let us see what this volume means.

As we mentioned earlier, there is an important operation in the graphics pipeline, in fact part of the fourth stage that we are currently discussing, in which whatever objects lie outside the view volume are clipped out. That operation is called clipping, and we perform it to remove all the objects that are outside the view volume.

We already know what a view volume is: the region of 3D space that we want to project. Now, if the scene involves many objects that are partly inside and partly outside the view volume, or many objects entirely outside it, then we require a large number of calculations to determine what to clip out. These calculations find the intersection points between object surfaces and the view volume boundaries. If the number of objects is large, there will be many such boundary calculations, and they are not easy: they involve many floating-point operations, so the complexity is high.

If we have to perform such intersection calculations with respect to an arbitrary view volume, where we have no control over the boundary planes, the complexity only increases. We can then expect a large number of computations, likely taking so much time that the quality of the displayed image suffers and we get to see flicker.

To avoid that, we can use one simple idea: we can come up with a standardized definition of the view volume, irrespective of how the actual view volume looks.
We can always convert it to a standardized, or standard, view volume. This is called the canonical view volume, or CVV in short. Essentially, it is a standard representation of the view volume irrespective of the actual nature of the volume.

Remember that there are two types of view volume: one for parallel projection, which is a rectangular parallelepiped, and one for perspective projection, which is a frustum. Both of these can be converted to a standard form, which we call the canonical view volume, and this makes the intersection calculations standard and easier to implement.

For both parallel and perspective projection, the standardized view volume looks the same. However, the ways of arriving at the canonical volume for the two types of projection are different.

Let us start with parallel projection. For parallel projection, the canonical view volume is defined as a cube within a specified range, namely -1 to 1 along all three axes X, Y and Z. As already mentioned, any arbitrary view volume of this type can be transformed to the CVV simply by a scaling operation. So, given an arbitrary view volume defined in terms of its six bounding planes, we can always map it by scaling to this range along the X, Y and Z directions, and correspondingly we get the canonical view volume.

In the case of perspective projection, the transformation is slightly more complicated, because here we are dealing with a view frustum, and we need to convert it to the canonical view volume of parallel projection, that is, the rectangular parallelepiped whose X, Y and Z extents lie within the specified range. What we can do here, talking only about the idea rather than going into the details, is convert the arbitrary view frustum to the canonical view volume by applying shearing and scaling in sequence. As we can guess from the figures, shearing is required to change the shape and scaling is required to change the size. When we apply these two transformations to the original view frustum, we get the canonical view volume; we will not go into any further detail than this basic idea.

So, what have we learned? We define a view volume, and we transform this view volume to a canonical view volume so that in later stages, when we perform clipping, the calculations are easier to implement, because we are dealing with a standardized definition of the view volume.
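To make the scaling idea concrete, here is a minimal sketch in Python with NumPy (not part of the original lecture), assuming an axis-aligned view volume given by its six bounding values. A translation first centres the volume at the origin, which is needed whenever the volume is not already centred, and a scaling then fits it to the -1 to 1 range; the function name and the example numbers are illustrative assumptions.

```python
import numpy as np

def parallel_cvv_matrix(xmin, xmax, ymin, ymax, zmin, zmax):
    """Map an axis-aligned view volume to the canonical view volume,
    the cube spanning -1 to 1 along X, Y and Z."""
    # Translate the centre of the volume to the origin.
    T = np.array([[1.0, 0.0, 0.0, -(xmin + xmax) / 2],
                  [0.0, 1.0, 0.0, -(ymin + ymax) / 2],
                  [0.0, 0.0, 1.0, -(zmin + zmax) / 2],
                  [0.0, 0.0, 0.0, 1.0]])
    # Scale each extent so it runs from -1 to 1.
    S = np.array([[2.0 / (xmax - xmin), 0.0, 0.0, 0.0],
                  [0.0, 2.0 / (ymax - ymin), 0.0, 0.0],
                  [0.0, 0.0, 2.0 / (zmax - zmin), 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return S @ T  # right to left: translate first, then scale

# A corner of the volume maps to a corner of the CVV.
N = parallel_cvv_matrix(0, 4, 0, 2, -10, -2)
print(N @ np.array([4.0, 2.0, -2.0, 1.0]))  # -> [1. 1. 1. 1.]
```

For the perspective frustum, a shear (to align the frustum's axis with the Z axis) would be composed before the scaling, as described above.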
(Refer Slide Time: 10:14)

Let us revisit the sequence of transformations that we perform to project a point p in the world coordinate scene to a point on the view plane. We mentioned that this is akin to taking a photograph: we transfer the point to the view coordinate system and then take a projection. Earlier, we mentioned only these two steps; now a third step is added in between.

First, we transform the world coordinate point to the view coordinate system, as discussed in an earlier lecture. The next step is not projection. Instead, in this view coordinate description we define a view volume, and this view volume is transformed to a canonical view volume; accordingly, the point is also transformed by applying the same set of transformations. So, the second step is to transform the point in the view volume to a point in the canonical view volume. The final step is to perform the projection transformation, that is, to project the point in the canonical view volume onto the view plane.

These three steps, a transformation to the view coordinate system, then a transformation to the canonical view volume, and then the projection transformation, constitute the sequence through which we project a point in the world coordinate scene to a point on the view plane. Mathematically, in the matrix notation that we are following, we can write this series of steps as a product of three matrices: the rightmost matrix represents the transformation to the view coordinate system, the middle one the transformation to the canonical view volume, and the leftmost one the projection transformation. Since we apply them in sequence, we follow the right-to-left rule: first the transformation to the view coordinate system, then the transformation to the canonical view volume, and then the transformation to the view plane through projection.

So, that is the idea of performing projection on the view plane. There is one more point to be noted. So far, we said that in projection 3D points are mapped to 2D, the implication being that we are removing the Z or depth component. However, when we implement the pipeline, this depth component is actually not removed. Why is that so?

One operation that we perform in this fourth stage is called hidden surface removal; we will talk about this operation in detail in a later lecture. The point to be noted here is that this operation requires depth information. So, the depth information is not discarded after projection. Instead, the original depth information is kept in separate storage called the Z-buffer or depth buffer, and it is used later for hidden surface removal, which gives us a realistic effect in the image.
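The following sketch, again in Python with NumPy and not from the lecture itself, shows only the structure of this sequence: the three matrices are identity placeholders standing in for the actual view, CVV and projection transformations, and the point coordinates are made up for illustration.

```python
import numpy as np

# Placeholder 4x4 matrices; in a real pipeline these come from the camera
# setup, the view-volume-to-CVV mapping, and the chosen projection.
V = np.eye(4)  # world coordinates -> view (eye/camera) coordinates
N = np.eye(4)  # view volume -> canonical view volume
P = np.eye(4)  # projection onto the view plane

# Right-to-left composition: V applies first, then N, then P.
M = P @ N @ V

p_world = np.array([2.0, 1.0, -3.0, 1.0])  # a world point, homogeneous
p_projected = M @ p_world

# The depth component is not thrown away after projection: it is kept
# aside, conceptually in the Z-buffer, for hidden surface removal later.
z_buffer_entry = p_projected[2] / p_projected[3]
```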
(Refer Slide Time: 14:56)

So, that is, in short, what we do during projection and how we project from a world coordinate scene to the view plane. Now there is one more stage: the mapping from the view plane to a viewport in the device space.

(Refer Slide Time: 15:21)

What have we discussed so far? We discussed the steps to transform a point in world coordinates to a clipping window on the view plane, that is, a region on the view plane onto which we project the objects that are part of the view volume. We have also shown that this is typically the near plane of the canonical view volume. So, this is our window, or clipping window.

(Refer Slide Time: 15:59)

For simplicity, we can assume that the window is at zero depth, that is, Z equal to 0, although in general that is not an absolute requirement.

It may also be noted that we are talking of a canonical view volume, so the X and Y extents must lie within a fixed range irrespective of their actual position in the world coordinate scene. Because everything is restricted to a fixed range, the canonical view volume is standardized, and the clipping window defined on its near plane is often called a normalized window. So, here we are dealing with a normalized window, where the extents are within a predefined range.

Now, the view plane is actually an abstract concept, so the clipping window is also an abstract, intermediate concept. We cannot see it; what we get to see on the screen is something different. The points on the clipping window are to be shown on the screen, but the scene in the window need not occupy the whole screen. For example, suppose the outer rectangle in the figure defines a whole scene, out of which we have projected the part defined within the clipping window. This part can be displayed on any region of the screen and at any size; the region of the screen on which it is displayed is called the viewport.

So, we have two concepts here: the window, which is the same as the clipping window and is normalized, and onto which objects are projected; and the viewport, which is defined in the device space with respect to the screen origin and dimensions. The viewport refers to the region of the device space where the projected image is to be shown. This region can be at any location in the device space and can be of any size, irrespective of the size of the clipping window. So, what we need is to map from the window to the viewport, which requires one more transformation to transfer points from the window to the viewport.

Let us see how we can perform this transformation. Suppose we have a window and a viewport; note that here we are not using the normalized range. We are formulating the problem in a generic setting, where Wx and Wy can take any value. So, (Wx, Wy) is a point in the window, and we want to map it to a point (Vx, Vy) in the viewport.

How can we do that? The first thing is that we have to maintain the relative position of the point with respect to its boundaries; the same relative position has to be maintained in the viewport. If we impose that, we get a relationship between the window and viewport dimensions, which can then be simplified, as shown below.
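The relationship the lecture points to on the slide can be reconstructed as the standard window-to-viewport equations; the subscripted symbols below are the window and viewport boundary values.

```latex
% Equal relative position inside the window and the viewport:
\frac{v_x - vx_{\min}}{vx_{\max} - vx_{\min}} = \frac{w_x - wx_{\min}}{wx_{\max} - wx_{\min}}

% Rearranging gives a scale-and-translate form:
v_x = s_x w_x + t_x, \qquad
s_x = \frac{vx_{\max} - vx_{\min}}{wx_{\max} - wx_{\min}}, \qquad
t_x = vx_{\min} - s_x\, wx_{\min}

% Collected, with the analogous s_y and t_y, into a single 3x3 matrix
% in homogeneous coordinates:
\begin{pmatrix} v_x \\ v_y \\ 1 \end{pmatrix} =
\begin{pmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} w_x \\ w_y \\ 1 \end{pmatrix}
```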
So, we can represent the X coordinate of the point in the viewport in terms of the X coordinate of the point in the window and two constants, sx and tx, defined in terms of the window and viewport sizes. A similar relationship can be formed between the Y coordinate of the point in the viewport and the Y coordinate of the same point in the window: we again form the relationship between the y coordinates first, then simplify and rearrange to get Vy in terms of Wy and two constants, sy and ty, again defined in terms of the window and viewport sizes.

Using these expressions, we can form the transformation matrix shown above; this is the matrix to transform the window point to the viewport point. We follow the same rule as before: to get the transformed point, we multiply the original point by the transformation matrix. Note that here again we are dealing with the homogeneous coordinate system; since these are two-dimensional points, we have three-element vectors and three-by-three matrices. At the end, we need to divide the obtained coordinates by the homogeneous factor to get the transformed point. The approach is similar to what we have seen earlier.

So, that is the basic idea of how to transform a point in the window, or clipping window, to a point in the viewport, which can be anywhere in the device space. Now, let us try to understand the concepts that we have gone through so far in terms of illustrative examples.

(Refer Slide Time: 24:38)

In an earlier lecture, we came across an example in which an object, a camera position, a view-up direction and so on were all specified, and we computed the transformed centre point of the object in the view coordinate system. We will not go into the details of that calculation again; the transformed point we got after applying the viewing transformation was (0, 0, -1).

Now let us assume that the view plane is located at Z equal to -0.5 and we want parallel projection. What would be the coordinates of the object centre after the projection, assuming that the view volume is sufficiently large to encompass the whole transformed object? Our parallel projection transformation matrix is given on the slide, and we know d is 0.5, so we can write down the projection matrix. If we multiply the projection matrix by the point vector, we get the projected point on the view plane. Here, since the homogeneous factor is 1, the point is obtained directly.

Now, let us consider perspective projection. Earlier, we considered parallel projection; what happens if we use perspective projection with the same view plane? What would be the new point after projection? The transformation matrix for perspective projection is shown on the slide; knowing the value of d and substituting it, we get our projection matrix. We multiply this matrix by the point vector as before, and after multiplication we get the transformed point in the homogeneous coordinate system.
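As a sketch of these two multiplications in Python with NumPy: the matrices below are one common textbook form of the parallel and perspective projection matrices, assuming the centre of projection at the origin and the view plane at z = -d. The lecture's slides may use a different but equivalent convention; the resulting points agree with the ones discussed here.

```python
import numpy as np

d = 0.5                                 # view plane at z = -d
p = np.array([0.0, 0.0, -1.0, 1.0])     # transformed object centre

# Parallel projection onto the plane z = -d: x and y pass through
# unchanged, z is replaced by -d, and the homogeneous factor stays 1.
P_par = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, -d],
                  [0.0, 0.0, 0.0, 1.0]])
print(P_par @ p)   # -> [ 0.   0.  -0.5  1. ]; homogeneous factor is 1

# Perspective projection with the centre of projection at the origin:
# the last row produces w = -z/d, so the homogeneous factor is not 1.
P_per = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, -1.0 / d, 0.0]])
print(P_per @ p)   # -> [ 0.  0. -1.  2. ]; the division by w comes next
```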
Note here that the homogeneous factor is not 1. As I mentioned earlier, in projection, particularly perspective projection, we get homogeneous factors that are not 1, so we need to be careful in concluding the final transformed point: we have to divide what we obtained by the homogeneous factor. After the division, we get the final point produced by applying the perspective projection to the central point.

So, we have performed projection. Now let us see what happens when we perform the window-to-viewport transformation. Assume that we projected the point onto a normalized clipping window and that the projected point is at the centre of the normalized window. We now define a viewport with its lower-left corner at (4, 4) and its top-right corner at (6, 8). If we perform the window-to-viewport transformation, what would be the position of the same central point in the viewport? Let us try to derive that.

We already mentioned that the clipping window is normalized, so the extents of the window are fixed: both x and y run from -1 to 1. From the viewport specification, the lower-left corner gives us the minimum values 4 and 4, and the top-right corner gives us the maximum values 6 and 8. Next, we simply substitute these values into the transformation matrix that we saw earlier.

We first compute the constants sx, sy, tx, ty using these values: sx is 1, sy is 2, tx is 5 and ty is 6. The transformation matrix is then obtained by substituting the sx, sy, tx, ty values into the original transformation matrix. This is our window-to-viewport transformation matrix.

Once we have the transformation matrix, it is easy to obtain the transformed point in the viewport: we multiply the transformation matrix by the point vector to obtain the transformed point in homogeneous coordinates, and divide by the homogeneous factor as usual (here the factor works out to 1), which gives us the point (5, 6). This is our transformed point after applying the window-to-viewport transformation.

In this example, you have probably noted that we defined the viewport irrespective of the window description; we can define it anywhere in the device space. What we need is just the transformation. You have probably also noted that the viewport size has nothing to do with the window size: the window is normalized, whereas the viewport is not. We can choose any size by specifying the viewport's coordinate extents, and through the mapping we get the transformed point. This gives us the flexibility of placing the projected image anywhere on the screen, at any size.
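Here is the same worked example as a short Python/NumPy sketch, using the normalized window and the viewport corners from the example above.

```python
import numpy as np

# Normalized window: x and y both in [-1, 1]; viewport corners as given.
wxmin, wxmax, wymin, wymax = -1.0, 1.0, -1.0, 1.0
vxmin, vymin = 4.0, 4.0   # lower-left corner of the viewport
vxmax, vymax = 6.0, 8.0   # top-right corner of the viewport

sx = (vxmax - vxmin) / (wxmax - wxmin)   # = 1
sy = (vymax - vymin) / (wymax - wymin)   # = 2
tx = vxmin - sx * wxmin                  # = 5
ty = vymin - sy * wymin                  # = 6

M = np.array([[sx, 0.0, tx],
              [0.0, sy, ty],
              [0.0, 0.0, 1.0]])

p_window = np.array([0.0, 0.0, 1.0])     # centre of the normalized window
p_viewport = M @ p_window
p_viewport /= p_viewport[2]              # divide by the homogeneous factor
print(p_viewport[:2])                    # -> [5. 6.]
```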
In summary, what we have discussed so far are three sub-stages of the viewing pipeline stage: the view transformation, the projection transformation and the viewport transformation. To recap, these three sub-stages are used to simulate the effect of taking a photograph. When we take a photograph, we look at the scene through the viewing mechanism provided in the camera; we mimic that by performing the viewing transformation, which transforms the world coordinate scene to the 3D view coordinate system and is equivalent to watching the scene through the camera's viewing mechanism. Then we take the photo, that is, we project the scene onto the view plane, which is done through the projection transformation.

Finally, we display the image on the screen. This is of course not part of the photograph analogy, but we do it in computer graphics, and that stage is mimicked by the window-to-viewport transformation. This transformation gives us the flexibility of displaying the projected image anywhere on the screen and at any size, irrespective of the size of the clipping window.

In the fourth stage, apart from these three sub-stages, which correspond to three types of transformations, there are two more operations, both already mentioned in this lecture: clipping and hidden surface removal. These two operations we will discuss in subsequent lectures.

Whatever I have discussed today can be found in the book Computer Graphics. You can go through Chapter 6: Section 6.2.3 covers the canonical view volume, and Section 6.3 discusses the window-to-viewport transformation in detail. That is all for today. Thank you and goodbye.