Module 1: Scan Conversion

Hello and welcome to lecture number 25 in the course Computer Graphics. We are currently discussing the 3D graphics pipeline which consists of five stages. Let us quickly recap the stages.
As you can see in this figure, the first stage is object representation, then we have modeling transformation, then lighting or the coloring of objects, then the viewing pipeline, and the fifth stage is scan conversion. Among them, we have already discussed the first four stages, namely object representation, modeling transformation, lighting, and the viewing pipeline.
Now we are going to discuss the fifth stage, that is, rendering, also known as scan conversion. So what is this stage all about?
Let us have a look at the very basic problem that we try to address in this fifth stage.
Whatever we have learned so far gives us one particular capability. Through the four stages that we have discussed, we can transform a 3D scene to a 2D viewport description, which is in the device coordinate system. Let us quickly have a relook at how it is done, as we have learned in our previous discussions.
So first, we have a 3D scene, say, for example, this cube. Now, this cube is defined in its own local coordinate system, which is what we do in the first stage, that is, object representation.
So defining the object in its local coordinate system is stage one. Then, in the second stage, through modeling transformation, we transfer it to the world coordinate description, so the object is now in world coordinates; that is stage two.
Then we assign colors by using the lighting and shading models. So we color the object in stage three, and after that we perform viewing transformation, which is stage four, in which we transfer it to a viewport description. This involves, first, transferring the world coordinates into a view coordinate system; then, from view coordinates, we perform projection transformation to transfer it to the view plane; then, from the view plane, we perform a window-to-viewport mapping to transfer it to this viewport, which is in the device coordinate system.
So these three transformations take place in stage four along with, of course, clipping and hidden surface removal. And after that what we get is a 2D representation of the object or scene on a viewport, which is in the device coordinate system. This is how things get transformed from object definition to viewport description.
However, the device coordinate system that we are talking about is a continuous system; that means the coordinate values of any point can be any real number. So we can have a coordinate like (2, 3), made of integers, but we can also have a coordinate like (2.5, 3.1), made of real numbers. All sorts of coordinates are allowed in the device coordinate system.
In contrast, when we are actually trying to display it on a screen, we have a pixel grid that means it is a discrete coordinate system. So all possible coordinates are not allowed, instead, we must have something where only integer coordinates are defined. So whatever we want to display on the pixel grid must be displayed in terms of integer coordinates, we cannot have real coordinate values.
Thus, what do we need? We need to map from the viewport description, which is a continuous coordinate space, to a pixel grid, which is a discrete coordinate space. This is the final mapping that we need to do before a scene is rendered on a physical display screen. Now, these mapping algorithms, or the techniques that we use for mapping, are collectively known as rendering or, more popularly, scan conversion, or sometimes rasterization, as we are mostly dealing with raster scan devices. So all three terms are used: rendering, scan conversion, or rasterization.
So what can be the straightforward approach to do this? You may think it is pretty simple. What we can do is simply round off the real coordinates to the nearest integer coordinates. For example, if we have a coordinate value like (2.3, 2.6), we can round it off to (2, 3), the nearest integer coordinate values.
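In Python, this point mapping is nothing more than rounding each coordinate to the nearest integer (a minimal illustrative sketch, not the lecture's own code):

```python
# Map a point from the continuous device coordinate space to the
# nearest pixel by rounding each coordinate to the nearest integer.
point = (2.3, 2.6)
pixel = (round(point[0]), round(point[1]))
print(pixel)  # (2, 3)
```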
However, while this may be good for mapping points from a continuous coordinate space to a discrete coordinate space, the same scheme may not be good for lines, circles, or other primitive shapes that are required for rendering a scene. Now, let us try to understand how we can then take care of scan conversion of lines, circles, or other primitive shapes.
Let us start with line scan conversion, how we can basically map a line defined in a continuous coordinate space to a line defined in a discrete coordinate space.
We will start with a very simple and intuitive approach, then try to understand the problem with that approach, and then introduce better and better approaches. Now, we all know that we can define a line segment in terms of its endpoints. So to scan convert, what do we need? We need to map the endpoints of the line to the appropriate pixels, and also the other points that are on the line to the appropriate pixels.
Now, let us go through a very simple approach, how we can map the points that are on the line to the nearest pixels in the pixel grid.
So we can follow a four-step approach. In the first step, we map the endpoints to pixels simply by rounding off to the nearest integer. In that way, we get the starting and ending pixels for the line segment, so we now know which pixels bound the line. Then, in the second step, we take the endpoint having the lower x and y values as the starting point. In the third step, we work out the y value for successive x values.
Now, since we are dealing with a pixel grid, we know that pixels are separated by unit distances, so successive x values will differ by 1. In the fourth step, these computed y values are mapped to the nearest integer, giving us the pixel coordinates of each point.
So we first convert the endpoints to the nearest pixels; then we choose the endpoint having the lower x and y values as the starting point; and starting with that point, we compute the y value taking the x value as input, where successive x values differ by 1, and we continue till the other endpoint. So we compute all the y values between the starting and ending pixels. These computed y values are then mapped to the nearest integer values, giving us the pixel coordinates for those points. Let us try to understand this with one example.
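The four-step approach above can be sketched in Python as follows. This is an illustrative sketch, not the lecture's own code, and for simplicity it assumes a line whose slope lies between −1 and 1 (the slope consideration is discussed later in the lecture):

```python
def simple_scan_convert(x0, y0, x1, y1):
    """Naive line scan conversion (assumes slope m between -1 and 1).

    Step 1: round the endpoints to the nearest pixels.
    Step 2: start from the endpoint with the lower x value.
    Steps 3-4: for successive x values (differing by 1), compute
    y = m*x + b and round it to the nearest integer.
    """
    x0, y0, x1, y1 = round(x0), round(y0), round(x1), round(y1)
    if x0 > x1:                       # ensure we start at the lower x
        x0, y0, x1, y1 = x1, y1, x0, y0
    m = (y1 - y0) / (x1 - x0)         # slope
    b = y0 - m * x0                   # y-intercept
    pixels = []
    for x in range(x0, x1 + 1):
        y = m * x + b                 # floating-point multiplication
        pixels.append((x, round(y)))  # floating-point rounding
    return pixels

print(simple_scan_convert(2, 2, 7, 5))
# [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5)]
```

Running it on the lecture's example endpoints (2, 2) and (7, 5) reproduces the pixels worked out below.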
Suppose this is our line segment, as shown in this figure: this is one endpoint, and this is the other. Initially, the line was defined by these two endpoints, A and B. As you can see, both have real-valued coordinates, so we have to map them to the nearest pixels.
For A, if we do the rounding off, we get this pixel as the nearest pixel, and for B we get this one. Now, we can see that the coordinates of A′ are less than those of B′. So we choose A′ as our starting pixel, and we continue finding the y values till we reach the other endpoint pixel.
Now, our objective is to compute the y values for successive x values. For that, we require the line equation which involves computation of the slope m, which in our case turns out to be this.
Because we know the two endpoints, we can compute the slope and also the y-intercept value. We are assuming here that the line is expressed by the equation y = mx + b, where m is the slope and b is the y-intercept. Given the two endpoints, we can solve for m and b and find that m is 3/5 and b is 4/5.
Then what do we do? For each x separated by unit distance, starting from the lower endpoint pixel, that is, x = 2, we compute the y values using the line equation till we reach the other endpoint. The line equation is given here, and in this equation, we use the x values to get the y values.
If we do so, what do we find? For x = 2, we have y = 2; when x is 3, the next pixel x coordinate, we get y = 2.6; when x is 4, we get y = 3.2; for x = 5, y = 3.8; and for x = 6, y = 4.4. So we get the four intermediate points, with their y values computed using the line equation.
So the four points are (3, 2.6), (4, 3.2), (5, 3.8), and (6, 4.4). These are the four values that we compute using the line equation. Now, we map these y values to the nearest integers to get the pixel locations. If we do the rounding off, we get the four pixels (3, 3), which is here; then (4, 3), which is here; (5, 4) here; and (6, 4) here. So these are our four intermediate pixels corresponding to the points on the line.
Now, with this approach, as you can see from the way we computed the values and ultimately found the pixels, there are broadly two problems. The first problem is that we need to perform the multiplication of m and x; since m is likely to be a real value, that is a floating-point operation. Secondly, we need to round off the y coordinate values, which is also a floating-point operation. Together, these floating-point operations are computation intensive.
So we have a computationally expensive approach to converting a line to the corresponding pixels. In reality, we need to scan convert a very large number of lines within a very small time. If floating-point operations are involved, this process becomes slow and we will perceive flicker, which is of course something we do not want. So we need better solutions. Let us have a look at a slightly better approach.
But before that, I would like to draw your attention to another important topic: the consideration of the slope of a line when we are performing this scan conversion.
In our example, we calculated the y coordinate value for each x coordinate value: we increased x by 1 and calculated the corresponding y values using the line equation. You may wonder why we did so; we could have done it the other way around, increasing y and calculating the x values. Let us see what happens if we calculate x values by increasing y values.
Now, we have these two endpoint pixels and the lower pixel is the starting point denoted by A’.
This time we are increasing y by 1. That means we are moving this way; earlier, we were moving along the x-direction, now we are moving from one scan line to the next. Then we calculate x based on the equation. Now, we need a modified equation, which is given here, and we already know b and m, the y-intercept and slope respectively. So we simply plug in the y value to get the x value.
Now, if we do so, we will see that we have only two successive y values, this one and this one, between y = 2 and y = 5. So earlier we computed four x values; this time we are required to compute only two values, because there are only two increases of y between the endpoints. So when y = 3, x turns out to be 3.7 using the equation, and when y = 4, x is 5.3.
Then what have we computed between the two endpoints? Two new points, (3.7, 3) and (5.3, 4). If we round them off to the nearest integers, we get the pixels (4, 3) and (5, 4); let us place them here. So (4, 3) is this point and (5, 4) is this point. Note that earlier, we got four additional points when we moved along the x-direction; now we are getting only two points, these two, when we move along the y-direction and compute x.
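The same numbers can be checked with a couple of lines of Python, using the m = 0.6 and b = 0.8 computed earlier (an illustrative sketch):

```python
# Stepping along y instead of x: invert the line equation to
# x = (y - b) / m and compute x for the intermediate y values.
m, b = 0.6, 0.8
points = [((y - b) / m, y) for y in (3, 4)]     # only y = 3 and y = 4
pixels = [(round(x), y) for x, y in points]
print([(round(x, 1), y) for x, y in points])    # [(3.7, 3), (5.3, 4)]
print(pixels)                                   # [(4, 3), (5, 4)]
```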
Clearly, the first set, that is, the 4 intermediate pixels plus the 2 endpoint pixels, total 6 pixels, gives a better approximation to the line than the second set, consisting of a total of 4 pixels: the two endpoints and the two newly computed pixels. So the first set is better than the second set because the approximation is better due to the larger number of pixels.
Now, that is the issue here. How do we decide which coordinate to calculate and when? Should we start with x and calculate y, or should we increase y and calculate x? Now, this decision is taken based on the slope of the line, depending on the slope we take a call.
If the slope is within the range −1 ≤ m ≤ 1, then we work out the y values based on the x coordinates of the pixels: we increase x by 1 and compute the y values. If m is not within this range, then we compute x by increasing the y coordinates. So that is our rule: when m is within the range, we compute y based on x, where x takes the pixel coordinates, that is, integers; otherwise, we compute x based on y, where y takes the pixel coordinates in integers. That is how we make the decision.
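The decision rule can be captured in a small helper (an illustrative, hypothetical sketch; comparing |Δy| against |Δx| avoids the division, so vertical lines need no special case):

```python
def sampling_axis(x0, y0, x1, y1):
    """Return 'x' when -1 <= m <= 1 (step x by 1, compute y),
    and 'y' otherwise (step y by 1, compute x)."""
    # |m| <= 1 holds exactly when |y1 - y0| <= |x1 - x0|
    return 'x' if abs(y1 - y0) <= abs(x1 - x0) else 'y'

print(sampling_axis(2, 2, 7, 5))  # 'x'  (m = 3/5, gentle slope)
print(sampling_axis(2, 2, 4, 8))  # 'y'  (m = 3, steep slope)
```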
Now, let us move on to a better line scan conversion algorithm compared to the simple approach that we learned earlier. This approach is called DDA, or digital differential analyzer.
DDA stands for digital differential analyzer. It is an incremental approach developed to reduce floating-point operations, that is, to increase computation speed and speed up the scan conversion process.
Let us try to first understand the idea. We will use the same example that we have seen earlier but this time, we will note a few more points. So earlier, we computed 4 points between the two end points (2, 2) and (7, 5) by increasing x and computing y values. Now, these 4 points are (3, 2.6), (4, 3.2), (5, 3.8) and (6, 4.4). Now, we will have a closer look at these points, what they tell us.
We computed the slope to be 3/5, or 0.6. Now, each successive y value is actually the addition of this slope value to the current value. The first value we got is 2.6; the second is 3.2, which we can get by adding the slope value 0.6 to the earlier value 2.6.
The next value, 3.8, is again the earlier value plus the slope, and finally 4.4 is again the earlier value plus the slope. So there is a pattern: we add the slope value to the earlier value to get the new value. This idea is exploited in the DDA algorithm.
So instead of computing y with the line equation every time, we can simply add m to the current y value; that is, the new y is obtained by adding m to the current value. So we do not need to evaluate the line equation every time we want to compute a y value.
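This pattern is easy to verify in a few lines of Python, using the slope m = 0.6 from the running example (an illustrative sketch):

```python
# Successive y values are obtained by repeatedly adding the slope m,
# with no multiplication against the line equation.
y, m = 2.0, 0.6
ys = []
for _ in range(4):
    y += m                      # one floating-point addition per step
    ys.append(round(y, 1))      # rounded to one decimal for display
print(ys)  # [2.6, 3.2, 3.8, 4.4] -- same values as the line equation
```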
What is the advantage of that? It eliminates the floating-point multiplication m × x involved in the computation. Eliminating these calculations in turn reduces the computational cost.
Now, as I said earlier, the slope is an important consideration here. When the slope is not within the range, that is, when the slope is greater than 1 or less than −1, we do not compute successive y values; instead, we compute x values in a similar way: the new value is the old value plus a constant term, which in this case is 1/m (earlier it was m). So we obtain the new x value by adding the constant 1/m to the current value, and here also we are eliminating the floating-point multiplications.
The complete algorithm is shown here. The input is the two endpoints, and the output is the set of all pixels that are part of the line. We first compute m. When m is within the range, we compute successive y values as shown here, by adding m to the current y value, round off to get the pixel, and add the pixel to the set. When m is not within the range, we compute successive x values by adding 1/m to the current value and perform the same steps.
In both cases, we continue till the other endpoint, as you can see in the loop termination conditions. That is how we improve on the simple line scan conversion approach, by exploiting one particular property: we can compute the successive x or y values by simply adding a constant term.
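Putting both slope cases together, the DDA algorithm described above might be sketched in Python as follows (an illustrative implementation, not the lecture's own code):

```python
def dda_line(x0, y0, x1, y1):
    """DDA line scan conversion: each new coordinate is obtained by
    adding a constant (m, or 1/m for steep lines) to the previous one,
    eliminating the per-step floating-point multiplication."""
    x0, y0, x1, y1 = round(x0), round(y0), round(x1), round(y1)
    dx, dy = x1 - x0, y1 - y0
    pixels = [(x0, y0)]
    if dx == 0 and dy == 0:           # degenerate line: a single point
        return pixels
    if abs(dy) <= abs(dx):            # -1 <= m <= 1: step along x
        m = dy / dx
        step = 1 if dx > 0 else -1
        x, y = x0, float(y0)
        for _ in range(abs(dx)):
            x += step
            y += m * step             # incremental: add the slope
            pixels.append((x, round(y)))
    else:                             # steep line: step along y
        inv_m = dx / dy
        step = 1 if dy > 0 else -1
        x, y = float(x0), y0
        for _ in range(abs(dy)):
            y += step
            x += inv_m * step         # incremental: add 1/m
            pixels.append((round(x), y))
    return pixels

print(dda_line(2, 2, 7, 5))
# [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5)]
```

On the running example, this produces the same six pixels as the naive approach, but each step costs only an addition and a rounding, with no multiplication by the line equation.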
This is clearly some improvement over the simple approach. However, there are still issues.
With the DDA algorithm, as we have noted, we can reduce floating-point operations, but only some of them: we cannot remove all, we can only eliminate the multiplications.
That still leaves us with the other floating-point operations, namely addition and rounding off. Any floating-point operation is computationally expensive and requires additional resources. So when, in reality, we need to generate a large number of line segments in a very short span of time, our ideal objective should be to eliminate all floating-point operations altogether, rather than only a few. Eliminating a few, of course, improves the overall rendering rate, but eliminating all should be our ultimate objective.
That is so because, for large line segments or a large number of line segments, these floating-point operations may create problems. In particular, when we are dealing with a very large line segment, the rounding off may result in pixels far away from the actual line. For example, consider a very long line like this: if we perform rounding off, we may keep getting pixels like this, which actually looks like a distorted line. For small segments this may not be the case, but for large line segments there is a possibility of visible distortion, which of course we do not want.
Whatever we have discussed today can be found in the book Computer Graphics. You are advised to go through Chapter 9 up to Section 9.1.1 to know in more detail whatever we have discussed. The improved line drawing algorithm will be taken up in the next lecture. Till then, thank you and goodbye.