Module 1: Lighting and Viewing Pipeline

Hello and welcome to lecture number 17 in the course Computer Graphics. Currently we are discussing the third stage of the graphics pipeline. We have already covered the first two stages in our earlier lectures, and we are now on the third stage, that is, lighting, or assigning colour to the surface points of objects in a scene. We have already covered three topics in this stage, namely lighting, shading and intensity mapping. Today we will continue our discussion of the activities performed in the third stage of the pipeline. Two more concepts that are part of this stage will be discussed today: one is colour models, the other is texture mapping. With these two topics, we will conclude our overall discussion of the third stage of the graphics pipeline.

Before we start our discussion on these topics, I would like you to understand the basic idea behind our perception of colour. In order to do that, we need to know the psychology and physiology of vision. How do we perceive? We mentioned earlier that colour is a psychological phenomenon; it is essentially our perception that there is a colour. Now, where does this perception come from? It comes from the physiology of our visual system, that is, the way our eyes are made and the way they work. So, essentially, the physiology determines the psychological effect.

Let us go through the physiology of vision briefly. Look at the figure here; it shows how our eye is organized. On the outside of the eye we have the cornea, and then the other components: the pupil, iris, lens, retina, optic nerve and central fovea. When light arrives after being reflected from a point, as the figure shows, the rays incident on the eye pass through the cornea, the pupil and the lens, and after passing through these they reach the back of the eye, the retina. During its passage through these components, the light is refracted by the cornea and the lens so that the image is focused on the retina. So, the cornea and the lens help to focus the image on the retina. Once the light falls on the retina, an image is formed, and it is transmitted to the brain through the optic nerve.

The amount of light entering the eye is controlled by the iris, and that is done through dilation or constriction of the pupil: the iris dilates or constricts the pupil to control the amount of light entering the eye.

Now, I said that once the light falls on the retina, an image is formed. How is that done? The retina is composed of optic nerve fibres and photoreceptors, which help in forming the image and transmitting it to the brain. There are two types of photoreceptors: rods and cones. Rods are found mostly in the peripheral region of the retina, whereas cones are concentrated in a small central region of the retina called the fovea. So, we have two types of photoreceptors, rods and cones, in the retina, which receive the light and help create the image. One more thing: more than one rod can share an optic nerve, so there can be several rods connected to each optic nerve fibre in the retina.
The rods help in one thing: they aid sensitivity at low levels of light. When the light is not very bright, we still manage to see things, and that is due to the presence of the rods. The cones, on the other hand, have more or less one optic nerve fibre each, unlike the rods, and they aid image resolution or acuity.

When we see something with the help of the cones, that is called photopic vision, and when we see with the help of the rods, that is called scotopic vision. So, there are two types of vision, photopic and scotopic. And when we say we are seeing a coloured image, the fact is that we perceive colours only in photopic vision. In scotopic vision we do not see colours; instead, we see a series of grays, or different gray levels, rather than different colours. This is very important to remember: when we talk about coloured images, that is, about perceiving colours, we are talking about photopic vision only.

There is one more thing we should know, and that is the idea of visible light. When we see a colour, as we have already discussed, that is due to light: light coming from some source gets reflected from a point and enters our eye, and because of that we perceive a colour at that point. This light is called visible light; it allows us to perceive colour. What is visible light? It refers to a spectrum of frequencies of electromagnetic waves, the light waves. A spectrum means a range. At one end of the spectrum is red light, with the frequency mentioned here and a 700-nanometre wavelength. At the other end is violet light, with the frequency and wavelength mentioned here. So, red is the component with the lowest frequency and violet the component with the highest frequency, and all frequencies in between are part of visible light. Equivalently, red has the longest wavelength and violet the shortest, and the wavelengths in between make up the visible light.

Why are we calling it visible light? Because there are light waves with frequencies outside this range as well, but we do not call them visible light, for one simple reason: the frequencies present in the visible spectrum are able to excite the cones in our eye, giving us photopic vision, or the perception of colour. Light waves that fall outside this spectrum do not have this property. That is why we call this range visible light.

Now, there are three cone types present in the retina, three types of cone photoreceptors. One type is called L or R; as you may guess from the name, this type of cone is most sensitive to red light. Then we have M or G, most sensitive to green light, which has a wavelength of 560 nanometres. And then we have S or B, most sensitive to blue light, with a 430-nanometre wavelength. So, there are three cone types, each most sensitive to light of a particular frequency; we call these light waves red, green and blue.

Then how do we perceive colour? When light arrives, it contains all these frequencies. Accordingly, all three cone types get stimulated, and as the combined effect of the stimulation of the three cone types, we perceive the colour. This theory of how we perceive colour is known as the tristimulus theory of vision, because its basis is the idea that there are three cone types whose stimulation gives us the perception. We will come back to this idea later. So, that is, in a nutshell, how we perceive colour.
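The tristimulus idea can be sketched in a few lines of code: a spectrum stimulates the three cone types, and the triple of responses is what produces the colour sensation. The bell-shaped sensitivity curves, their peak wavelengths and widths below are simplified illustrative assumptions, not real physiological data.

```python
# Illustrative sketch of the tristimulus theory: a light spectrum stimulates
# the three cone types (L, M, S), and the triple of responses gives the
# colour sensation. The Gaussian sensitivities and peaks are assumptions.

import math

def cone_sensitivity(wavelength_nm, peak_nm, width_nm=60.0):
    """Assumed bell-shaped sensitivity of a cone type around its peak."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def cone_responses(spectrum):
    """spectrum: list of (wavelength_nm, power) samples -> cone responses.

    Each cone's response is the power at every wavelength weighted by that
    cone's sensitivity, summed over the whole spectrum.
    """
    peaks = {"L": 580.0, "M": 545.0, "S": 440.0}  # illustrative peaks only
    return {
        cone: sum(p * cone_sensitivity(w, peak) for w, p in spectrum)
        for cone, peak in peaks.items()
    }

# A spectrum dominated by long (reddish) wavelengths...
reddish = [(440, 0.1), (545, 0.2), (620, 1.0), (700, 0.8)]
r = cone_responses(reddish)
# ...stimulates the L cones far more than the S cones.
print(r["L"] > r["M"] > r["S"])  # True
```

Two different spectra that happen to produce the same (L, M, S) triple would be indistinguishable to this model, which is exactly the metamerism idea discussed next.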
So, we have our eye constructed in a particular way, with three cone types in the retina; these cone types get stimulated by the frequencies present in the visible light, and as a result we perceive colour. That is the basic theory of how our eyes work and how we perceive colour. Now, with this knowledge, how can we actually build a realistic computer graphics system? Let us see how this knowledge helps us in colour generation in computer graphics.

This question brings us to a concept called metamerism, or metamers. What is that? What do we want in computer graphics? We are primarily interested in synthesizing colours; we are not interested in the actual optical process that gives us the perception of colour. Our sole objective is to synthesize colour so that the resulting scene or image looks realistic. We can do that with the idea of metamers and the overall theory of metamerism. How?

Let us revisit what we have just discussed. When light is incident on our eye, it is composed of different frequencies of the light spectrum, including the visible light frequencies. These visible frequencies excite the three cone types, L, M and S (or R, G and B), in different ways, and that in turn gives us the sensation of a particular colour. So, all three cone types get excited by the corresponding frequencies, this excitation differs for different incident light, and accordingly we see different colours.

One thing we should keep in mind, though: when we get the perception of a colour, the underlying process need not be unique. In the eye it works in a particular way, because there are three cone types that get excited in a particular way to give us the perception of colour, but there can be other ways to produce the perception of the same colour. In other words, the sensation of a colour C resulting from a spectrum S1 can also result from a different spectrum S2. That is, a light source gets reflected from a point and reaches our eye with one spectrum, which excites the three cone types in a particular way and gives us the sensation of a colour C; but that is not the only spectrum that can produce this sensation. Another spectrum can excite the three cone types in a different way and yet give us the same colour perception. So, there are multiple ways to generate a colour perception; it is not unique. This is a very important fact that we exploit in computer graphics.

This possibility, that multiple spectra can give us the sensation of the same colour, is due to the optical behaviour we call metamerism, and the different spectra that result in the sensation of the same colour are known as metamers.

What does this imply? Metamers imply that we do not need to know the exact physical process behind colour perception. The exact physical process may involve one particular spectrum, say S1; instead, we can come up with an artificial way to generate the same sensation using another spectrum S2, which is in our control. S1 and S2 are then metamers and give the perception of the same colour. So, we may not know exactly what the spectrum is when we perceive a particular colour, but we can always recreate that sensation using another spectrum which is a metamer of the actual one.

In other words, we can come up with a set of basic or primary colours and then combine, or mix, these colours in appropriate amounts to synthesize the desired colour. A corollary of the metamerism concept is that we need not know the exact spectrum; instead, we can always take a set of primary or basic colours and mix them in appropriate amounts to get the sensation of the desired colour.
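The mixing just described can be sketched as follows. The choice of three primaries and the particular normalized weights here are illustrative; the mix is additive, per component.

```python
# Sketch of synthesizing a colour by mixing a small set of primaries in
# given amounts. The three primaries (red, green, blue) and the
# normalized weights are illustrative choices.

def mix(primaries, amounts):
    """Additively combine primaries, each scaled by its amount in [0, 1].

    primaries: list of (r, g, b) tuples, components in [0, 1]
    amounts:   list of weights in [0, 1], one per primary
    Returns the mixed colour, clamped to [0, 1] per component.
    """
    mixed = [0.0, 0.0, 0.0]
    for colour, amount in zip(primaries, amounts):
        for i in range(3):
            mixed[i] += amount * colour[i]
    return tuple(min(1.0, c) for c in mixed)

RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Full amounts of red and green give yellow; all three give white.
print(mix([RED, GREEN, BLUE], [1.0, 1.0, 0.0]))  # (1.0, 1.0, 0.0) = yellow
print(mix([RED, GREEN, BLUE], [1.0, 1.0, 1.0]))  # (1.0, 1.0, 1.0) = white
```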
Now, this idea, that a colour is a mixture of the three primary colours red, green and blue, is called the RGB colour model, the most basic of the colour models. The idea behind this model, as you can guess, comes directly from the way our eyes work: there are three cone types, and we excite them differently, by varying the light spectrum incident on the eye, to generate the perception of different colours.

The idea is illustrated in this figure. As you can see, these are the three component colours: red, green and blue. When they are mixed in a particular way, for example, blue and green here, with red also mixed in, we get a particular colour; if we mix different amounts, we get another colour, and so on. So, in the RGB model we use three primary colours, red, green and blue, and mix them to get any colour. What do we mean by mixing? Here, mixing means we add the colours. RGB is thus an additive model: any colour is obtained by adding proper amounts of red, green and blue. This is important to remember.

And how can we visualize this model? Remember that there are three primaries; thus we can think of a colour as a point in a 3D colour space whose three axes correspond to the three primary colours. Further, if we assume normalized colour values, that is, values within the range 0 to 1, which is typically what we do when we use lighting models, we can visualize the RGB model as a 3D colour cube, as shown here. This cube is also known as the colour gamut of the model, that is, the set of all possible colours that can be generated by the model.

Now, RGB is an additive colour model, but there are other colour models as well, for example the XYZ, CMY and HSV models. These models are used for different purposes and in different situations, and not all of them are additive; there are subtractive models too. However, in this lecture we will not go into the details of any other model; if you are interested, you may refer to the material mentioned at the end of this lecture.

Now, let us move to our other topic, that is, the synthesis of textures. Earlier, we talked about the lighting model to compute colour. One thing we did not explicitly mention is that a surface coloured with the intensity values computed by the lighting model appears smooth, which is usually not realistic. Typically, the surfaces we see are non-smooth; they have something more than a smooth distribution of colour. For instance, on the surface of the wooden plank in this figure there are various patterns. These patterns cannot be obtained by applying the lighting model discussed earlier; and when I say the lighting model, I include the shading models as well, because the shading models are essentially based on the lighting model. So, the lighting and shading models we have discussed cannot give us the various patterns and other features that typically appear on real surfaces. To achieve those realistic effects, other techniques are used.

Broadly, there are three such techniques, together called texture synthesis, since we want to synthesize the texture that appears on a surface: projected texture, texture mapping and solid texturing. Let us start with projected texture. The idea is very simple.
When we say we have generated a coloured image, we mean we have created a 2D array of pixel values, after, of course, the entire pipeline is completed, including the fifth stage, scan conversion. These pixel values represent intensities or colours. Now, on this surface we want to generate a particular texture, a particular pattern or effect. What can we do? We can create the texture pattern separately and paste it on the surface. Two things are involved here: we already have a pixel grid whose values represent the coloured surface without texture, and we separately create a texture pattern and paste it on the pixel grid, that is, on the colour values already computed using the lighting or shading model.

This projected texture method, the creation and pasting of a texture on the surface, can be done with a simple technique. We create a texture image, or texture map, from a synthesized or scanned image: either we artificially create an image, or we scan one, and use it as the texture map, which is a 2D array of colour values. To differentiate it from the computed colour values we talked about earlier, we call this 2D array a texel array, and each array element a texel. So, we have a pixel grid representing the original surface and a texel grid representing the separately created texture pattern, with a one-to-one correspondence between the texel and pixel arrays. Then what do we do? We replace each pixel colour with the corresponding texel value, to mimic the idea of pasting the texture on the surface.

So, the first stage is creating the texel grid, that is, the texture pattern; then we paste it by replacing the pixel values with the corresponding texel values, where there is a one-to-one correspondence between the pixel and texel grid elements. This pasting, or replacement, can be done broadly in three ways. The first is the obvious one: simply replace the pixel value with the corresponding texel value. The second is slightly more involved: we blend the pixel and texel values using the formula shown here, where addition is used for blending, C_pixel denotes the pixel intensity, C_texel the texel intensity, and k is a constant between 0 and 1. The third approach is to perform a logical operation, either AND or OR, between the pixel and texel values. Remember that these values are stored as bit strings in the frame buffer, so we can perform a logical AND or logical OR between the two bit strings, which gives a new bit string representing the final colour, that is, the projected texture pattern.

So, this is the projected texture method: we create the texture pattern separately and then paste it, in one of three ways: simply replacing the pixel value with the texel value, blending the two values, or combining the two bit strings with a logical AND or OR.

There is one more special technique used in projected texturing, apart from the ones just discussed: the MIPMAP technique, where MIPMAP stands for multum in parvo map, that is, "many things in a small space" map. What is the idea? Earlier, we talked about creating one texture pattern.
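Before moving on to MIPMAP, the three pasting options above can be sketched as follows. The blending formula is an assumption here, taken as the usual linear combination C = k·C_texel + (1 − k)·C_pixel with k in [0, 1]; the exact formula is the one on the slide. Intensities are treated as 8-bit values so that the logical operations work on their bit strings.

```python
# Sketch of the three ways to paste a texel value onto a pixel value.
# The blend is an assumed linear form C = k*C_texel + (1-k)*C_pixel;
# intensities are 8-bit integers so AND/OR act on their bit strings.

def paste_replace(pixel, texel):
    """Method 1: simply replace the pixel value with the texel value."""
    return texel

def paste_blend(pixel, texel, k):
    """Method 2: blend pixel and texel intensities (assumed linear form)."""
    return round(k * texel + (1 - k) * pixel)

def paste_logical(pixel, texel, op="or"):
    """Method 3: bitwise AND/OR between the two stored bit strings."""
    return (pixel | texel) if op == "or" else (pixel & texel)

pixel, texel = 0b10100000, 0b00001111   # 160 and 15 as 8-bit intensities
print(paste_replace(pixel, texel))      # 15
print(paste_blend(pixel, texel, 0.4))   # 102
print(paste_logical(pixel, texel))      # 175 (bitwise OR of the bit strings)
```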
In the MIPMAP technique, we create more than one texture pattern. In fact, we create a number of texture maps of decreasing resolution for the same texture image, and we store all of them. So, in the MIPMAP technique we may store all these texture maps for the same image at different sizes, as shown in this figure: this is one image, this is another, and so on, with the size reducing progressively although the image remains the same. How do we use this?

Suppose we are asked to generate something like the pattern shown here. The region closer to the viewer has bigger patterns, and as the distance from the viewer increases, the pattern sizes become progressively smaller. In the MIPMAP technique, we store these different sizes of the same pattern and simply paste them at the appropriate regions of the image, rather than creating a more complicated pattern. That is the idea of MIPMAP, and it is used to generate realistic texture effects.

Next is the second technique, texture mapping, which we primarily use for curved surfaces. On curved surfaces it is very difficult to use the previous technique; simple pasting of a texture does not work, and we go for a more general definition of the texture map. What do we do here? We assume a texture map defined in a 2D texture space whose principal axes are denoted by u and w, and an object surface represented in parametric form, usually with the symbols θ and φ. Of course, this is one notation; there can be other notations as well. Then we define two mapping functions from the texture space to the object space, of the forms shown here: this is one, and this is the other. In the simplest case, these mapping functions take the form of linear functions, as shown here, with four constants A, B, C and D.
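A minimal sketch of such linear mapping functions, θ = A·u + B and φ = C·w + D, is shown below. The constant values are taken from the worked example that follows, which pastes a unit texture square onto the middle of a surface.

```python
# Sketch of the simplest (linear) texture-space -> object-space mapping:
#   theta = A*u + B,  phi = C*w + D.
# The constants (A=50, B=25, C=50, D=25) are those of the worked example,
# which maps the unit texture square onto the middle of the surface.

def make_linear_mapping(A, B, C, D):
    """Return a function mapping texture coords (u, w) to (theta, phi)."""
    def mapping(u, w):
        return (A * u + B, C * w + D)
    return mapping

texture_to_object = make_linear_mapping(A=50, B=25, C=50, D=25)

# The texture-space corners (0,0) and (1,1) land on the object-space
# corners (25,25) and (75,75) of the target region.
print(texture_to_object(0, 0))  # (25, 25)
print(texture_to_object(1, 1))  # (75, 75)
```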
Using these mapping functions, we map a texture value defined in the texture space to a value in the object space, and then use that value to create the particular pattern.

Let us try to understand this with an example. Consider the top figure: here we have a texture pattern defined in a texture space. This pattern is to be pasted on the surface shown here, specifically in the middle of the overall surface. To create that effect, we need a mapping from the pattern to the object surface space. What mappings should we use? We will assume the simplest, the linear mapping, and try to estimate the mapping functions. The specification of the surface is given here in terms of its size. Using that information, we form a parametric representation of the target surface area, the middle of the square, with the set of equations shown here, which is easy to derive yourself.

With this parametric representation, we make use of the relationships between the parameters in the two spaces at the corner points. For example, the point (0, 0) in the texture space is mapped to the point (25, 25) in the object space, and so on for all the corner points listed here. Substituting these values into the linear mapping functions we saw earlier, we get the constants A = 50, B = 25, C = 50 and D = 25. So, our mapping functions are finally θ = 50u + 25 and φ = 50w + 25.

That is the idea of the second category of texturing. There is also a third category of texturing technique, solid texturing. Texture mapping is difficult in many situations where we have complex surfaces, or where there should be some continuity of the texture between adjacent surfaces. For example, consider the wooden block here: as you can see, there is continuity between the textures on the adjacent surfaces, and unless this continuity is maintained, we will not be able to create a realistic effect. In such cases, we use solid texturing.

I will just give you the basic idea without going into the details, because this technique is slightly more complicated than the previous ones. Earlier, we defined a texture space in 2D; now we define the texture in a 3D texture space whose principal axes are usually denoted by u, v and w. Then we perform some transformations to place the object in the texture space: any point on the object surface is transformed to a point in the texture space, and whatever colour is defined at that point is taken as the colour of the corresponding surface point. So, here we perform a transformation from object space to texture space, and the colour already defined at that texture space point is used as the colour of the surface point. This transformation is more complicated than the mapping we saw earlier, and we will not go into its further details.

In summary, let us quickly recap what we have learnt. With this lecture, we conclude our discussion of stage 3 of the pipeline, that is, colouring. We covered three broad concepts: the lighting model to compute colour; the shading model to interpolate colours, which reduces computation; and intensity mapping, to map between the computed intensities and the device-supported intensities. Along with that, we understood the basics of colour models and how to create texture patterns. Of course, these are very basic concepts; there are advanced concepts which we did not discuss in this introductory treatment, and for more details you may refer to the material at the end.
In our next lecture, we will start our discussion on the fourth stage of the pipeline, the viewing pipeline, which itself consists of many sub-stages. Whatever we have discussed so far can be found in chapter 5, in particular sections 5.1, 5.2.1 and 5.3. However, if you are interested in learning about other colour models, as well as some more details on 3D texturing, you may go through the other sections as well. See you in the next lecture. Thank you and goodbye.