Module 1: Graphics Hardware and Software

Hello, and welcome to lecture number 29 in the course Computer Graphics. Before we go into today's topic, we will quickly recap what we have learned so far.

Till today, we have covered the stages of the 3D graphics pipeline, and we have completed our discussion of those stages. Today and in the next few lectures, we are going to look into their implementation, that is, how the pipeline stages are implemented.

So, in these lectures on the pipeline, as well as in the lectures that preceded the pipeline discussion, what have we learned? We can summarize it as the fundamental process involved in synthesizing, or depicting, an image on a computer screen.

In this process, there are several stages. The process starts with an abstract representation of objects, which involves representing points or vertices, lines or edges, and other such geometric primitives; that is the first thing we do. Next, the subsequent stages of the pipeline are applied to convert this representation into a bit sequence, a sequence of 0s and 1s. This sequence is stored in the frame buffer, and the content of the frame buffer is used by the video controller to activate the appropriate pixels, so that we perceive the image. That is the whole process: we first define some objects, or in other words a scene; we apply the pipeline stages on this definition to convert it to 0s and 1s; these 0s and 1s are stored in a frame buffer; and the frame buffer values are used by the video controller to activate the appropriate pixels on the screen to give us the perception of the desired image.

So far, we have discussed only the theoretical aspects of this process, that is, how it works conceptually. But we did not discuss how these concepts are implemented.
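The process just described can be sketched in code. The following is a minimal toy sketch, not any real graphics API: all names (`render_point`, `scan_out`, the 1-bit pixel format, the 8x4 resolution) are invented for illustration. A tiny "scene" is converted into 0s and 1s in a frame buffer, and a "video controller" loop then reads each value to decide whether the corresponding pixel is activated.

```python
# A minimal sketch of the process: a scene definition is rasterized into a
# frame buffer of 0s and 1s, which a "video controller" then scans out.
# All names and the 1-bit pixel format are illustrative, not a real API.

WIDTH, HEIGHT = 8, 4

# Frame buffer: one bit per pixel (0 = off, 1 = on).
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def render_point(x, y):
    """Final pipeline stage in miniature: set the bit for one pixel."""
    frame_buffer[y][x] = 1

# "Scene definition": a short horizontal line of three points.
for x in (2, 3, 4):
    render_point(x, 1)

def scan_out(fb):
    """Video controller: read the frame buffer row by row and
    activate ('#') or leave dark ('.') each pixel."""
    return "\n".join("".join("#" if bit else "." for bit in row) for row in fb)

print(scan_out(frame_buffer))
```

Everything downstream of the frame buffer, in this picture, is just the `scan_out` loop: the controller never knows about objects, only bits.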
Today and in the next few lectures, we will do that; our primary focus will be how the concepts we have discussed are implemented in practice. So, what will we learn? We will learn the overall architecture of a graphics system. Then, we will discuss display device technology. We will also learn about the graphics processing unit, or GPU, in brief. Then, we will see how the 3D pipeline is implemented on graphics hardware. And finally, we will learn about OpenGL, a library provided to ease graphics software implementation.

We will start with what a graphics system architecture looks like. Remember that we have already introduced a generic system architecture in our introductory lectures. We will quickly recap it and then try to understand it with the new knowledge we have gained in our previous discussions.

If you recollect, the generic system architecture has several components, as shown in this figure. We have the host computer, which issues commands and accepts interaction data. We have the display controller, a dedicated graphics processing unit, which may take input from input devices. The output of this display controller is stored in video memory, and this video memory content is used by the video controller to render the image on the screen. That is what we briefly learned earlier.

But as may be obvious, the terms we used were very broad; they give a generic idea without any details. In the last few lectures, we have learned new things: how the pipeline is organized, what the algorithms are, and what they do. So, in light of that new knowledge, let us try to understand the relationship between these hardware components and the pipeline stages. Let us assume that we have written a program to display two objects on the screen: a very simple image having only a ball and a cube, something like this.
So, on this screen we will show a ball, maybe with some lines, and a cube. We want to show these two objects as an image on the screen, and we have written a program to do that. Let us try to understand, with respect to the generic architecture, what happens.

Once the CPU detects that the process involves graphics operations, because a display is involved, it transfers control to the display controller. In other words, it frees itself from graphics-related activities so that it can perform other activities. The controller has its own processing unit, separate from the CPU, called the GPU, or graphics processing unit. We will learn about the GPU in more detail in a later lecture.

These processing units can perform the pipeline stages in a better way. There are specialized instructions with which the GPU can apply the stages to the object definition to get the sequence of bits. Essentially, conversion of the object definition into a sequence of bits is performed by the GPU using specialized instructions.

This bit sequence is stored in the frame buffer, which we have already mentioned. In interactive systems, where the user can provide input, the frame buffer content may change depending on the input coming from the input devices.

But we must keep in mind that the frame buffer is only a part of the video memory; it is not the entire video memory. We also require other memory to store object definitions as well as instructions for graphics operations, that is, the code and data. That is what constitutes video memory: the frame buffer plus other memory to store various things.

Now, how do we organize this memory? There are two ways. We can integrate the memory into the generic architecture as shared system memory, that is, a single memory shared by both CPU and GPU. Clearly, in that case, memory accesses must go over the common system bus, as shown in this figure.
So, execution may be slower. In this figure, as you can see, we have the CPU, and the GPU as part of the display controller, and a common system memory that both of them access through the system bus. If the GPU wants to access memory, it must use the system bus; if the CPU wants to access it, it must also use the system bus; and accordingly, it may be slow.

Otherwise, we can have dedicated graphics memory as part of the graphics controller organization. As shown here, the display controller has exclusive access to this dedicated graphics memory, or video memory. This memory has two components: one containing other things, and one called the frame buffer. Here, there is no need to access shared memory through the common system bus, so it is faster than the previous scheme.

Once the data is available in the frame buffer, the video controller acts on the frame buffer content. Acting means mapping the content to the activation of the corresponding pixels on the screen. For example, in the case of a CRT, activation refers to excitation, as we have seen earlier, of the corresponding phosphor dots on the screen by an appropriate amount.

Now, how is the appropriate amount chosen? The amount of excitation is determined by the electron beam intensity, which in turn is determined by the voltage applied to the electron gun, which in turn is determined by the frame buffer value. That is how the frame buffer value affects the amount of excitation in a CRT, and a similar thing happens with other devices as well.

That, in summary, is how we can understand the generic system architecture in light of the stages we have learned. We can relate the stages to the ultimate generation of the image on the screen at a broad level, as we have just discussed.
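The chain "frame buffer value → gun voltage → beam intensity → phosphor excitation" can be sketched as a simple numeric model. This is purely illustrative, not real device behaviour: the maximum voltage, the linear value-to-voltage mapping, and the gamma exponent are all assumptions made up for the sketch.

```python
# Illustrative toy model of how a stored frame buffer value could determine
# phosphor excitation in a CRT: value -> gun voltage -> beam intensity.
# The constants (max voltage, gamma) are assumptions, not device data.

MAX_VOLTAGE = 100.0   # hypothetical maximum electron-gun voltage
GAMMA = 2.2           # CRT-like nonlinearity (assumed value)

def gun_voltage(fb_value, bits=8):
    """Map an n-bit frame buffer value linearly onto a gun voltage."""
    return MAX_VOLTAGE * fb_value / (2**bits - 1)

def beam_intensity(voltage):
    """In this toy model, beam intensity (and hence phosphor
    excitation) grows nonlinearly with the applied voltage."""
    return (voltage / MAX_VOLTAGE) ** GAMMA

# A larger stored value yields a stronger excitation, hence a brighter pixel.
low = beam_intensity(gun_voltage(64))
high = beam_intensity(gun_voltage(255))
print(low < high)
```

The point of the sketch is only the direction of the chain: nothing touches the screen directly; every brightness on screen traces back to a number sitting in the frame buffer.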
Now, let us try to get a more detailed understanding of different graphics hardware and software. We will start with graphics input and output devices.

Let us start with the output devices. As we all know, whenever we talk of a graphics output device, what immediately comes to mind is the video monitor, or the so-called computer screen. But there are other output devices as well. For example, output also means projectors, where we project the content. Of course, both can be present together in a graphics system, the monitor as well as a projector.

In addition, there may be a third mode of output: hardcopy output. We are already familiar with these; one is the printer, the other is the plotter. Also, nowadays we have wearable displays such as head-mounted displays (HMDs), which are not traditional computer screens but also provide a way to display output. So, output is available in different ways.

In this lecture, we will talk briefly about video monitors and hardcopy outputs, namely printers and plotters. We will start with the video monitor.

Whatever screens we see nowadays are all called flat panel displays. This is a generic term for displays that are flat compared to the earlier CRTs, which used to be bulky. They are thinner and lighter than CRTs, of course, and useful for both non-portable and portable systems. And they are almost everywhere: desktops, laptops, palmtops, calculators, advertising boards, video-game consoles, wristwatches, and so on. Everywhere, we get to see flat panel displays. There is wide variation among these displays.

"Flat panel", effectively, is a generic term indicating a display monitor with much reduced volume, weight, and power consumption compared to a CRT.
So, whenever we talk of "flat", it has to be understood in the context of the CRT.

There are broadly two types of flat panel displays: emissive displays and non-emissive displays.

Emissive displays, often known as emitters, convert electrical energy into light on the screen. Examples are plasma panels, thin-film electroluminescent displays, and light-emitting diodes, or LEDs.

Non-emissive displays convert light, which may be natural or may come from other sources, into a graphics pattern on the screen through optical effects; this is important. An example is the LCD, or liquid crystal display.

Let us go into a bit more detail on these types of displays. We will start with emissive displays.

As we mentioned, one example of an emissive display is the plasma panel display. In such displays, we have two glass panels, or plates, placed parallel to each other, as shown in this figure. The region in between is filled with a mixture of gases: xenon, neon, and helium. So, the inside region between the two parallel glass plates is filled with gases.

Now, the inner wall of each plate contains a set of parallel conductors. These conductors are very thin and ribbon-shaped. As shown here, they are placed in sets of parallel conductors on the inner side of each plate. One plate has a set of vertical conductors, whereas the other contains a set of horizontal conductors. The region between each corresponding pair of conductors, that is, a horizontal and a vertical conductor, is defined as a pixel, as shown here.

The screen-side wall of each pixel is coated with phosphors. For RGB, or colour, displays, we have three phosphors corresponding to the R, G, and B values.

Now, what happens?
The image displayed on the screen is produced by ions that rush towards the electrodes and collide with the phosphor coating. When they collide, they emit light, and this light, as in the case of a CRT, gives us the perception of the image. The separation between pixels is achieved by the electric fields of the conductors. That is how plasma panels work.

Then we have LEDs, or light-emitting diodes, another type of emissive device. In this case, each pixel position is represented by an LED, so the overall display is a grid of LEDs corresponding to the pixel grid. This is different from the plasma panel, as you can see, where we did not have such grids; instead, ions collide with phosphors and produce light.

Based on the frame buffer content, a suitable voltage is applied to each diode in the grid to emit an appropriate amount of light. Again, this is similar to the CRT, where we use the frame buffer content to produce a suitable electron beam intensity and hence a suitable amount of light from the phosphors.

Let us now try to understand non-emissive displays. An example is the LCD, or liquid crystal display. Here, as in the plasma panel, we have two parallel glass plates, each carrying a light polarizer, aligned perpendicular to the other. Rows of horizontal transparent conductors are placed on the inside surface of the plate with the vertical polarizer, and columns of vertical transparent conductors on the other plate, which has the horizontal polarizer.

Between the plates, we have a liquid crystal material. This term refers to special materials that have a crystalline molecular arrangement even though they flow like liquids. LCDs typically contain threadlike, or nematic, crystalline molecules, which tend to align along their long axes.

The intersection points of each pair of mutually perpendicular conductors define the pixel positions.
When a pixel position is active, the molecules are aligned.

LCDs can be of two types: reflective and transmissive.

In a reflective display, external light enters through one polarizer and gets polarized. The molecular arrangement then ensures that the polarized light gets twisted so that it can pass through the opposite polarizer. Behind that polarizer, a reflective surface reflects the light back to the viewer. So here, the display depends on external light.

In a transmissive display, a light source is present on the back side of the screen, unlike in reflective displays, where there is no light source. Light from the source gets polarized after passing through the polarizer, is twisted by the liquid crystal molecules, and passes through the screen-side polarizer to the viewer. Here, to deactivate a pixel, a voltage is applied to the intersecting pair of conductors, which causes the molecules in the pixel region to be rearranged.

This rearrangement prevents the polarized light from getting twisted and passing through the opposite polarizer, effectively blocking the light. So, we do not see any colour at those pixel locations. The basic idea in liquid crystal displays is that we have liquid crystal material at the pixel positions; depending on the molecular arrangement, light passes through or gets blocked, and we get the image on the screen accordingly.

Another thing to note is that reflective and transmissive LCDs of this kind are both known as passive matrix LCD technology.

In contrast, we also have active matrix LCD technology, another method of constructing LCDs. In this case, thin-film transistors, or TFTs, are placed at each pixel location to give more control over the voltage at those locations. They are more sophisticated, and these transistors also help prevent charge from gradually leaking out of the liquid crystal cells.
Essentially, in the case of passive matrix, we do not have explicit control at the pixel locations, whereas in active matrix LCDs we have transistors placed at those locations to give more control over the way light passes.

Now, let us continue with graphics output devices.

As we said, when we talk of output devices, one category is display screens, which we have already discussed; the other is hardcopy devices. Among hardcopy output devices, we have printers and plotters. Printers are broadly of two types: impact printers and non-impact printers.

In impact printers, pre-formed character faces are pressed against an inked ribbon on the paper. An example is the line printer, where typefaces mounted on a band, chain, drum, or wheel are pressed against an inked ribbon on the paper. In a line printer, a whole line gets printed at a time.

There is also the character printer, which prints one character at a time. An example is the dot matrix printer; although they are no longer widespread, they are still used in a few cases. In such printers, the print head contains a rectangular array, or matrix, of protruding wire pins, or dots. The number of pins determines the print quality: a higher number means better quality. This matrix represents characters, and each pin can be retracted inwards.

During printing, some pins are retracted, whereas the remaining pins press against the ribbon on the paper, giving the impression of a particular character or pattern. So, the objective is to control the pins, or dots: which pins impact the ribbon and which are pulled back inwards. Those are impact printers.

More popular nowadays are non-impact printers, and we are all familiar with them: laser printers, inkjet printers, electrostatic methods, and electrothermal printing methods.

In the case of the laser printer, what happens?
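The pin-matrix idea can be sketched as follows. The 7x5 pattern for the letter "H" below is hand-made for illustration, not taken from any real printer font; a 1 means the pin strikes the ribbon, a 0 means the pin is retracted.

```python
# Toy sketch of a dot matrix print head: a 7x5 matrix of pins represents
# one character. 1 = pin presses against the ribbon, 0 = pin retracted.
# The 'H' pattern is hand-made for illustration.

H_PATTERN = [
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
]

def strike(pattern):
    """Render the ink dots one character leaves on paper:
    '*' where a pin strikes, a blank where it is retracted."""
    return "\n".join("".join("*" if pin else " " for pin in row)
                     for row in pattern)

print(strike(H_PATTERN))
```

A real print head holds such a column of pins and sweeps across the line, firing the right subset of pins at each position; the matrix above is the per-character control data.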
A laser beam is applied to a rotating drum coated with a photoelectric material such as selenium. Consequently, a charge distribution is created on the drum by the laser beam. Toner is then applied to the drum and gets transferred to the paper. Due to the charge distribution, the toner forms the pattern of what we wanted to print, and that pattern gets transferred to the paper. That is laser printing technology.

In inkjet printers, an electrically charged ink stream is sprayed in horizontal rows across the paper, which is wrapped around a drum. Electric fields deflect the charged ink stream, creating dot matrix patterns of ink on the paper. So essentially, there is an ink stream that gets deflected by an electric field and creates the desired pattern on the paper wrapped around the drum.

Then we have the electrostatic printer. In this case, a negative charge is placed on the paper at selected dot positions, one row at a time. The paper is then exposed to positively charged toner, which gets attracted to the negatively charged areas, producing the desired output.

And finally, we have electrothermal methods of printing. In this case, heat is applied to selected pins of a dot matrix print head, and the print head is used to put patterns on heat-sensitive paper. Of course, these two methods are not as common as laser and inkjet printers, but they are still used. That is how printers work.

So far, we have not mentioned anything about colour printing, so let us quickly try to understand how it works. Impact printers use different coloured ribbons to produce colour output.
But the range and quality of colour is usually limited, and much better in non-impact printers.

There, colour is produced by combining three colour pigments: cyan, magenta, and yellow. In laser and electrostatic devices, these three pigments are deposited on separate passes; in inkjet printers, the colours are shot together in a single pass along each line. So, different printers work differently.

Apart from printers, we also have plotters as another graphics output device. They produce hardcopy output, and typically they are used to generate drafting layouts and other drawings.

This figure shows one example plotter. Typically, in pen plotters, one or more pens are mounted on a carriage, or crossbar, which spans a sheet of paper. The paper can lie flat or be rolled onto a drum or belt, held in place with clamps, a vacuum, or an electrostatic charge. As shown here, there is a pen, a pen carriage, and a moving arm, and there are spare pens as well, for different colours. The pen can move along the arm, and the arm can move across the page.

To generate shading or styles, different pens can be used, with varying colours and widths, as shown here. And as I already mentioned, the pen-holding carriage can move, or it can be stationary, depending on the nature of the plotter.

Sometimes, instead of a pen, inkjet technology is used; that is, ink sprays are used to create the drawing.

And how is this movement controlled? Again, it depends on the content of the frame buffer. Depending on the frame buffer values, the movement of the pens, or of the ink spray, is determined, just as in the case of video monitors.

So, we have learned, in brief, about two types of graphics output devices, namely video monitors and hardcopy outputs.
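The idea that stored values drive pen movement can be sketched with a hypothetical command list, loosely in the spirit of pen-plotter command languages such as HP-GL. The command names and tuple format here are invented for illustration; they are not a real plotter protocol.

```python
# Sketch of driving a pen plotter: a drawing is reduced to pen-up travel
# moves, pen-down drawing moves, and pen selection. The command names are
# invented for illustration; real plotters use their own languages (e.g. HP-GL).

def plot_line(x0, y0, x1, y1, pen=1):
    """Emit the command sequence needed to draw one line segment."""
    return [
        ("SELECT_PEN", pen),   # choose a pen (colour / width)
        ("PEN_UP",),           # lift the pen, then travel to the start point
        ("MOVE", x0, y0),
        ("PEN_DOWN",),         # lower the pen, then drag it to the end point
        ("MOVE", x1, y1),
        ("PEN_UP",),           # lift again so the next travel leaves no mark
    ]

commands = plot_line(0, 0, 100, 50, pen=2)
for cmd in commands:
    print(cmd)
```

The key contrast with a raster monitor is visible here: the plotter consumes a sequence of movement commands derived from the stored drawing, rather than scanning a pixel grid.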
Let us now try to quickly understand input devices: what kinds of input there are and how they affect the frame buffer.

Most of the graphics systems we see nowadays provide data input facilities, meaning that users can manipulate screen images. These facilities are provided through input devices. The most well-known input devices are the keyboard and mouse, but there are many other devices and methods available. Let us have a quick look at these different devices and methods.

In the modern-day computing environment, as we know, we are surrounded by various computing devices. We have laptops, desktops, tablets, smartphones, smart TVs, microwaves, washing machines, pedometers, and many more such devices that we interact with every day, each of which can be termed a computer by the classical definition. Accordingly, we make use of various input devices to provide input to these computers.

All these input devices, or input methods, can be divided into broad categories. The objective of such devices is to provide means for natural interaction. They include speech-based interaction, meaning the computers are equipped with speech recognition and synthesis facilities.

With speech recognition, the computer can understand what we say: we provide input through our voice, and a speech recognition system interprets it. The computer can also produce output as human-understandable speech through synthesis.

Note that this is different from the input and output we mentioned earlier. Then we have eye-gaze interaction, where we use our eye gaze to provide input, and haptic or touch interaction; one example is the touchscreen, which we use heavily nowadays because of smartphones and tablets.

There are alternative output mechanisms as well, exploiting the sensation of touch. These are called tactile interfaces.
Here, we do not rely on a visual display; instead, we use tactile interfaces. These are primarily useful for people who have difficulty seeing.

We can also use "in-air" gestures to provide input. These gestures can be made with any of our body parts, such as the hands, fingers, or even the head, and there is no need to touch any surface, unlike with smartphones or other touchscreen devices, where we provide gestures by touching the surface.

We can also provide input through head or body movements. All these input mechanisms are heavily used nowadays. Traditional input mechanisms like the keyboard, mouse, joystick, and stylus are no longer as dominant; instead, we mostly interact with the computers around us through touch, gestures, speech, and so on. All these devices are equipped to recognize such input mechanisms.

Also, as I said, output need not always be visual. Sometimes it can be different, as with tactile output, where we perceive the output through touch rather than by seeing anything. There too, the frame buffer content can be utilized to create a particular sensation of touch as a specific output. We can also produce output through speech, or speech synthesis to be more precise, and so on.

Now, these inputs can be used to alter the frame buffer content. For example, suppose I have created an image of a cube and a ball, as in the example we started with, and I give a voice command to place the ball on the left side of the cube. The computer will understand this command and modify the frame buffer values accordingly, so that the ball is now placed on the left side of the cube.

Similarly, I can give a command to place the ball on the right side of the cube, and again the frame buffer values will change, so that the display we get is an image showing the ball on the right side of the cube, and so on.
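The voice-command example can be sketched as follows. Assume, purely for illustration, that a command has already been recognized and arrives as a plain string; the scene model, the symbols `x` and `o`, and the one-row "screen" are all invented for the sketch. The handler updates the scene and re-renders it into a tiny frame buffer, so the stored content changes with each command.

```python
# Sketch of input altering the frame buffer: a recognized command moves the
# ball relative to the cube, and the scene is re-rendered. The scene model,
# symbols, and command strings are invented for illustration.

WIDTH = 9
cube_x = 4            # fixed cube position on a one-row "screen"
ball_x = 6

def render():
    """Re-render the scene into a one-row frame buffer."""
    fb = ["."] * WIDTH
    fb[cube_x] = "x"   # cube
    fb[ball_x] = "o"   # ball
    return "".join(fb)

def handle_command(command):
    """Update the scene from a recognized command, then re-render."""
    global ball_x
    if command == "place the ball on the left side of the cube":
        ball_x = cube_x - 2
    elif command == "place the ball on the right side of the cube":
        ball_x = cube_x + 2
    return render()

print(handle_command("place the ball on the left side of the cube"))
print(handle_command("place the ball on the right side of the cube"))
```

The essential point is that the input never touches the screen directly: it modifies the scene, the scene is re-rendered into the frame buffer, and the display follows the frame buffer.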
So, with these inputs, we can change the output. That is, in brief, how we provide input and how it affects the frame buffer content to get different outputs.

Whatever I have discussed today can be found in this book; you may refer to chapter 10, sections 10.1 and 10.2. Today, we briefly discussed the different technologies used in computer graphics systems, namely display technologies, hardcopy output technologies, and input technologies.

In the next lecture, we will go deeper into graphics hardware and learn more about how the controller works, and how the GPUs in the controller are organized and help implement the pipeline stages. See you in the next lecture. Thank you, and goodbye.