Module 1: Lighting

Hello and welcome to lecture number 16 in the course Computer Graphics. We are currently discussing the different pipeline stages; the pipeline describes how the rendering of a 2D image on a computer screen takes place through the process of computer graphics.
As we know, there are five stages. We have already discussed the first two stages, namely Object representation and Modeling transformations. Currently, we are discussing the third stage, Lighting. After this, we will be left with two more stages to discuss: the fourth stage, the Viewing pipeline, and the fifth stage, Scan conversion.
In the third stage, Lighting, we deal with assigning colors to points on the surface of an object. In the previous couple of lectures, we learned about the process of coloring: we learned about a simple Lighting model, and we also learned about Shading models. To recap, a Lighting model is a complex mathematical expression to compute the color at a given point, and it makes use of the various components of light that are present when we see a colored object.
Now, these components are ambient light, diffuse reflection due to a direct light source, and specular reflection due to a direct light source. For each of these, we have learned models, and these models in turn make use of vectors: the surface normal vector, the viewing vector, and the vector towards the light source. All these vectors are used to compute the components. At the end, we sum up the three component contributions to get the overall color, which is expressed as an intensity value.
Now, this Lighting model is complex and involves many operations, so it takes time. In order to reduce computation time, we learned about Shading models, where we do not compute color values using the Lighting model at each and every point. Instead, we compute values at a very small number of points, maybe a single point on a surface, and use interpolation techniques to assign colors to the other points.
Now, this interpolation technique is much simpler than the Lighting model computations. These two techniques for assigning colors were discussed in the previous lectures. One more thing remains: how we map the computed intensity values, obtained either from the Lighting model or from a Shading model, to a bit sequence, a sequence of 0's and 1's that the computer understands. That will be the subject matter of today's discussion, Intensity Mapping. This is the third component of assigning color to an object surface.
Now, when we talk of Intensity Mapping, what do we refer to? We refer to a mapping process. What does it map? It maps the intensity value that we have computed using the Lighting or Shading model to a value that a computer understands, that is, a string of 0's and 1's.
If you recollect the worked-out examples discussed in the previous lectures, we have seen the computation of intensity values. Those values are real numbers, typically within the range 0 to 1. Now, these values are supposed to drive the mechanism that draws pictures on the screen.
In the introductory lectures, we touched upon the basic idea of a graphics system. There we mentioned that through the pipeline stages we compute intensity values, and these values are used to drive some electromechanical arrangement which is responsible for rendering, or displaying, a colored object on a computer screen.
As an example, we briefly touched upon the idea of cathode ray tube (CRT) displays. If you recollect, we said that a CRT display consists of an electromechanical arrangement in which electron beams are generated and made to hit locations on the screen representing the pixel grid. This generation of electron beams is done through an electromechanical arrangement consisting of cathodes, anodes, and magnetic fields.
And this electromechanical arrangement is controlled by the values that we compute at the end of the pipeline stages. So, our ultimate objective is to use the computed intensity values to drive the mechanism that is actually responsible for drawing colors, or pictures, on the screen.
As we have already mentioned, in a CRT display this picture drawing is done by an arrangement of electron guns, which emit electron beams, and there is a mechanism to deflect those beams to specific regions on the screen where phosphor dots are present. When a beam hits a phosphor dot, the dot emits photons with a particular light intensity, which gives us the sensation of a colored image on the screen.
Of course, CRT displays are now obsolete; you may not have encountered them. But there are lessons to learn from CRT displays, and towards the end of this course we will learn about other displays where similar things happen: we use the computed intensities to generate some effect on the screen that gives us a sensation of color, and these computed intensity values drive the mechanism that generates those effects. We will talk about such display mechanisms at the end of this course, where we will have dedicated lectures on Graphics Hardware.
Now, the point is this: the intensity values are supposed to drive a mechanism, some arrangement, which in turn is responsible for generating the effect of a colored image. But if an intensity value is computed as a real number in the range 0 to 1, how do we make the computer understand that value? Computers do not understand such real numbers; they only understand digital values, binary strings of 0's and 1's.
The problem here is that an arbitrary intensity value cannot be represented directly and used to drive the arrangement that generates the visual effect of a colored image on the screen; we need some way to represent the corresponding intensity values in the computer. Now, this representation depends on how we design the frame buffer.
And that is what we call the Mapping Problem. What is this problem?
Let us try to understand it in terms of an example. Suppose we have a graphics system with a frame buffer in which 8 bits are available for each pixel location, that is, 8 bits to store the intensity value of each pixel. Now, with 8 bits, how many colors can we represent? 2^8, or 256 values. It means that for each pixel location, we can assign any one of 256 values as the color value. So, for that particular graphics device, we can say that any pixel can take at most 256 color values.
On the other hand, when we compute the pixel colors, there is no such restriction: we can compute any number between 0 and 1, essentially an infinite range of values. Note that this computation takes place with the help of the Lighting or Shading models. So, on the one hand, we have values that can be anything, a real value between 0 and 1 obtained by applying the Lighting or Shading models.
And on the other hand, due to the particular hardware design, we can represent at most a restricted number of values for each pixel location, 256 in our example. So, essentially, we need to map this potentially infinite set of intensity values to the 256 values; this is the problem. Given the number of values that can be represented in the computer, we have to map the potential range of values to that restricted set.
This is our mapping problem, and we have to keep in mind that we cannot use an arbitrary mapping, because that may lead to visible distortion; our perception is a very sensitive and complex thing. If we decide the mapping arbitrarily, then we may perceive images differently from what ideally should have been the case. So, avoiding this distortion is another objective of our mapping.
So, we need to map, and we need to map in a way such that this distortion is not there. How can we achieve this objective? Let us try to understand the scheme through which we can achieve it.
So, that is the Mapping scheme.
The core idea behind the scheme is that we need to distribute the computed values among the system-supported values such that the distribution corresponds to the way our eyes perceive intensity differences. This is a slightly complex idea; let us try to understand it with an example.
Now, this core idea actually relies on our psychological behavior: how we perceive intensity differences.
Let us take an example. Suppose there are two sets of intensity values. In the first set, there are two intensities, 0.1 and 0.11, so the relative difference between the two intensities is 10 percent. In the second set, there are also two intensities, 0.5 and 0.55; again, the relative difference is 10 percent. Due to our psychological behavior, we do not perceive the absolute difference between the intensity values: the difference will look the same in both sets, although the absolute values are different.
So, in the first case we have two absolute values whose relative difference is 10 percent, and in the second set we have two absolute values, different from the first set, but with the same relative difference of 10 percent; indeed, 0.11/0.1 = 1.1 = 0.55/0.5.
If we are asked to look at those two sets of values, we will not be able to perceive any difference between the pairs, because of our psychological behavior: we do not perceive absolute differences; instead, we perceive relative differences. If the relative differences are the same, then we perceive no difference, in spite of the absolute differences being there.
So, that is one crucial behavioral trait of ours: we cannot perceive absolute differences in intensity values; only the relative difference matters. Now, if that is the case, then we can utilize this knowledge to distribute the computed intensity values among the device-supported intensity values. How can we do that?
It follows from this behavioral trait that if the ratio of two intensities is the same as the ratio of two other intensities, then we perceive the two differences as the same. This is an implication of the psychological behavior we just described, and using this implication, we can distribute the intensities. Let us see how.
Recall that we are given a continuous range of values between 0.0 and 1.0; this is our range of intensity values computed using the Lighting or Shading model. On the other hand, the device supports a set of discrete values, because the frame buffer is designed that way. We are supposed to map this continuous range to that set of discrete values; the continuous range needs to be distributed over the finite set of discrete values.
And we can do that without distorting the image by preserving the ratios of successive intensity values. If we preserve the ratio between successive intensity values, then even if we approximate a computed intensity by a device-supported intensity, the resulting image will not appear distorted. This follows from the psychological trait we just discussed: our eyes are not designed to perceive absolute differences in intensities; only relative differences matter.
So, based on this reasoning, we can come up with a mapping algorithm, a step-by-step process to map a computed intensity to one of the device-supported intensities.
Let us assume that the device supports N+1 discrete values for each pixel, and let us denote these values by I_0, I_1, up to I_N.
Now, we can use a particular device called a photometer to determine the boundary values, that is, I_0 and I_N. It means that we know the range of intensities supported by that particular system; this is called the dynamic range, which is bounded by I_0 and I_N.
Now, the highest value, I_N, is usually taken to be 1.0; that is the convention used. So, the intensities range between I_0 and 1; this is the range [I_0, 1], the dynamic range. And the value I_0 we can obtain using the photometer.
Now, we apply the knowledge we just discussed: to preserve the ratio between successive intensities, we must ensure that I_1/I_0 = I_2/I_1 = ... = I_N/I_(N-1) = r, a common ratio. So, the ratio of consecutive intensity values supported by the device should be the same.
In other words, we can express all intermediate values in terms of the lowest value: I_1 = r I_0, I_2 = r^2 I_0, I_3 = r^3 I_0, and so on.
So, in general, the equation I_k = r^k I_0 holds for k > 0, where I_0 is the minimum intensity. Going along this line, I_N = r^N I_0. So, this equation holds for any intensity value supported by the device. Note that the total number of intensity values supported by the device is N+1, where I_N is the maximum intensity value and I_0 is the minimum intensity value.
So, what do we need to do next? As already discussed, we have determined the minimum value using a photometer, and we assume the maximum value to be 1. Then we can determine the value of r by solving the equation 1 = r^N I_0, where we know the value of I_0, and we know N from the total number of intensity values supported by the device. Using this value of r, we can obtain any of the N+1 intensity values from the equation I_k = r^k I_0.
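In compact form, the relations of the scheme are:

$$\frac{I_1}{I_0} = \frac{I_2}{I_1} = \cdots = \frac{I_N}{I_{N-1}} = r, \qquad I_k = r^k I_0 \quad (0 \le k \le N),$$

$$I_N = 1.0 \;\Rightarrow\; r^N I_0 = 1 \;\Rightarrow\; r = \left(\frac{1}{I_0}\right)^{1/N}.$$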
Now, let us try to understand the next step. In the previous step, we computed the value of r knowing the minimum value, the maximum value, and the total number of intensity values supported by the device; based on that, we can compute any I_k. Now, suppose that, using a Lighting model, we computed the intensity value of a pixel; we denote this intensity value by I_P.
Now, we maintain a table. In the table, we keep the intensity values supported by the device, which we compute using the earlier equation: I_0 (obtained with the photometer), I_1, I_2, and so on up to I_N. Then, once we compute I_P, we look in this table for the value that comes closest to I_P; that is, we find the nearest supported value.
Let us call it I_k. For each value, we already have a bit pattern stored in the table: bit pattern 0, bit pattern 1, bit pattern 2, and so on up to bit pattern N for the N+1 intensity values. So, for the k-th intensity value I_k in the table, we know the corresponding bit pattern. We then take that bit pattern and store it in the frame buffer. That is how we map a value computed using a Lighting model to a bit pattern that represents a value supported by the device.
So, in summary, what do we do? We determine the value of N, determine the minimum value I_0 using a photometer, and assume the maximum value I_N to be 1.0. Then, using the equation I_N = r^N I_0, we solve for r. Using this value of r, we calculate the device-supported intensity values: we know I_0, then I_1 = r I_0, I_2 = r^2 I_0, and so on.
And for each of these computed values, we keep a bit pattern; this is our table, up to the bit pattern for the maximum value. Then, for a pixel, we compute the intensity value using a Lighting model, map it to the nearest device-supported intensity value by looking at the table, and use the corresponding bit pattern to represent the computed intensity value. Finally, we store that bit pattern in the frame buffer. That is how we map a computed intensity value to a bit pattern and store it in the frame buffer location.
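As a concrete illustration, here is a minimal Python sketch of the procedure just described. The function names, the linear nearest-value search, and the ascending binary bit-pattern assignment are illustrative choices, not prescribed by the lecture:

```python
def build_intensity_table(i0, n_bits):
    """Device-supported intensities I_0 .. I_N with a common ratio r
    between successive values, where I_N = 1.0 and N = 2^n_bits - 1."""
    n = 2 ** n_bits - 1                  # N: index of the maximum intensity
    r = (1.0 / i0) ** (1.0 / n)          # solve 1.0 = r^N * I_0 for r
    table = [i0 * r ** k for k in range(n + 1)]
    return r, table

def map_intensity(ip, table, n_bits):
    """Map a computed intensity ip to the nearest device-supported value;
    return its index k, the value I_k, and a bit pattern for it (here,
    simply k in binary -- the lecture notes this assignment is arbitrary)."""
    k = min(range(len(table)), key=lambda j: abs(table[j] - ip))
    return k, table[k], format(k, f"0{n_bits}b")
```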
Let us try to understand this whole process using one example.
Suppose we have a display device which supports a minimum intensity I_0 = 0.01; this value, as we mentioned earlier, is found with the photometer. As usual, we assume the maximum intensity value supported by the device to be I_N = 1.0.
Let us assume that the device supports 8 bits for each pixel location; in other words, it has 8 bits to represent the color of a pixel. Then the total number of intensity values supported by the device for each pixel, which we denote by M = N+1 as discussed earlier, is 2^8 = 256. So M = 256, which means N = 255; the supported intensity values run from I_0 to I_255.
So, these intensity values are I_0, I_1, I_2, up to I_255. Now, we can set up an equation based on the relationship I_N = r^N I_0; substituting the values of I_N, I_0, and N, we get 1.0 = r^255 × 0.01, and we solve this equation to get the value of r.
Solving it, we get r = 1.0182, and using this value we get the other intensity values: I_1 = r I_0 = 0.0102, I_2 = r^2 I_0 = 0.0104, and so on. And we create a table of these values.
In this table, we also assign bit patterns: for example, to I_0 we assign 00000000, to I_1 we assign 00000001, to I_2 we assign 00000010, and so on, up to 11111111 for the last value. This is, of course, only one possible mapping; the assignment of bit patterns can be arbitrary, and it really does not matter, because it has nothing to do with the preservation of ratios. What matters is the actual calculation of the intensity values, which is done based on the principle of preserving the ratios of successive intensity values, so that the resulting image is not distorted.
Now, let us assume that we have computed an intensity value using the Lighting model at a pixel location, and that value is 0.01039. So, this is our table, and we have computed this value.
So, as per the algorithm, we find the nearest intensity value that the device supports. In this case, that is I_2, or 0.0104, and the bit pattern corresponding to I_2 is 00000010. So, we store this bit pattern at the corresponding frame buffer location.
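Running the earlier sketch with these numbers reproduces the example (values rounded as in the lecture):

```python
r, table = build_intensity_table(0.01, 8)      # I_0 = 0.01, 8 bits => N = 255
print(round(r, 4))                             # 1.0182
print(round(table[1], 4), round(table[2], 4))  # 0.0102 0.0104
k, nearest, bits = map_intensity(0.01039, table, 8)
print(k, round(nearest, 4), bits)              # 2 0.0104 00000010
```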
Here you may note that the final intensity value represented and stored in the frame buffer differs from the actual value computed using the Lighting model, because of the mapping. It means that there will always be some error. Although by preserving the ratio of successive intensities we can alleviate the problem of visual distortion in the resulting image, there are still ways to improve the selection of the intensity representing a computed intensity.
And there are some techniques to do that, at different levels: one is Gamma correction; another is error correction for intensity mapping through halftoning or dithering methods. However, we will not go into the details of these methods. The basic idea is that, using these methods, we can reduce the effect of the mapping errors, that is, the difference between the computed intensity and the intensity we represent and store in the frame buffer. If you are interested, you may refer to the material mentioned at the end of this lecture.
So, in summary, what can we say?
In stage three, there are three broad concepts that we have covered. What are these concepts?
The first is the Lighting model. This is the basic method we follow to simulate the optical properties and behavior that give us the sensation of color. Now, the Lighting model is complex, so in order to avoid the complexity, we take recourse to Shading models; this is the second concept we learned. A Shading model is essentially a way to reduce computation while assigning colors to surface points: it makes use of the Lighting model, but in a very limited way, and uses interpolation techniques, which are less computationally intensive, to assign colors to surface points.
Then the third concept we discussed is Intensity Mapping. With a Lighting or Shading model, we compute color as a real number within the range 0 to 1, so any value can be computed. However, a computer does not support arbitrary values; it is discrete in nature, so essentially only a restricted subset of all possible values is supported.
For example, if we have an 8-bit frame buffer, meaning each pixel location is represented by 8 bits, we can support at most 256 intensity values for each pixel. A pixel color can be any one of these 256 values, whereas we compute color as any value between 0 and 1. So, we need to map it; this mapping introduces some amount of error, and this error may result in distortion.
However, to avoid distortion, we make use of one psychological aspect of our visual perception: we distribute the computed, or potential, intensities among the device-supported intensities in such a way that the ratio of consecutive intensities remains the same.
If we do that, then the perceived distortion of the image may be avoided. In spite of that, however, some error is introduced, which may affect the quality of the image.
In the next lecture, we will discuss another important aspect of the third stage, that is, Color models. Along with that, we will also learn about Texture synthesis; both are part of the third stage, coloring. So far, we have learned three concepts, and two more concepts we will learn in the subsequent lectures.
Whatever we have discussed today can be found in this book. You may refer to Chapter 4, Section 4.5, to learn about the topics, and also to find more details on the topics that we mentioned but did not discuss in detail, namely the error correction techniques and Gamma correction techniques. That is all for today. Thank you and goodbye.