Video 1
In this lecture we are going to discuss the basics of remote sensing image interpretation, or image representation: how an image is created and represented, whether we see it on a screen or on a printout. This is very important because, after going through electromagnetic radiation, different wavelengths, different sensors and the governing laws, what we ultimately get from remote sensing satellites is an image, and how that image is represented in two-dimensional space matters just as much. This is what we are going to discuss.

A digital image is a two-dimensional representation of objects present in the real world. In remote sensing, the image we see is a part of the earth's surface as seen from space. Because of the distance, and depending on the resolution, we do not see the entire earth in one go; at most, by any design, we can see half of the earth, and with higher spatial resolution data, or even relatively coarse spatial resolution data, only a part of the earth is seen, not the whole.

The image, as transmitted by the satellite, is digital, but an image can also be analog once it is displayed on a screen or produced in print form; so images can be analog or digital. In this course we will mostly discuss digital images. Earlier, before the current remote sensing technology, aerial photographs were used, and they are examples of analog images. A photograph is a snapshot, whereas an image is scanned line by line and recorded. For example, if you take a picture of somebody using your digital camera, the entire scene is grabbed in one click; that is a snapshot. But if you put a document on a scanner or a photocopy machine, the scanner moves from the left end to the right end and scans the document. Satellite sensors work the same way: as the satellite moves, the sensor scans a part of the earth and creates an image. That is the major difference between a photograph and an image: a photograph is a snapshot, whereas an image is created line by line.

Satellite images are acquired using electronic sensors which are sensitive to electromagnetic waves, and they are all originally in digital form. Here I am taking one example, with a zoomed-in part showing the digital numbers, which we also call pixel values. Pixel is an abbreviation that stands for picture element. What we see here is a black and white, panchromatic image; you are seeing lines here, and within each line there are cells, or pixels. Generally you will not see these lines in an image; the pixels are simply aligned like this in a two-dimensional matrix. The lines are shown here just for our better understanding; in the system, or in the memory of the computer, these lines are not there at all. Since this image is grey, various shades of grey are used to represent the part of the earth shown here, and the corresponding digital numbers are also given. You can notice that digital number 0 corresponds to completely black.
A digital number of 255 corresponds to white, and the rest of the pixels have shades of grey between those two extremes: one extreme is black, the other is white, and everything else lies in between. This is an example of an 8-bit, black and white image. So we can now define a digital image as a two-dimensional matrix, or array, of pixels.

One important thing to notice is that the shape of a cell, or pixel, is square. However, the overall shape of an image need not be square; it can also be rectangular. It is still, by definition, a two-dimensional matrix. What I mean is that the number of rows and the number of columns need not be the same; if they are different, the overall shape of the image is rectangular, and if the number of rows and columns is the same, the overall shape is square. The unit of an image, however, is always square in shape, as we see here, and squares of the same size are used throughout the image.

Each square represents an area average of the ground. Suppose one square corresponds to 10 metres by 10 metres; then a ground area of 10 metres by 10 metres is represented by a single pixel value. If this is a visible channel, it means that the reflection within that 10 metre by 10 metre area on the ground has been area-averaged and the sensor has recorded that value. The sensor itself does that part: it takes the average reflection, if it is a visible channel, and records it as a number. We see numbers, but when an image is made from them, the unit is always a square. I will repeat this part: the overall shape of an image can be either square or rectangular, that is, the number of rows and columns need not be the same; if they are different, the overall shape is rectangular, and if they are the same, the overall shape is square. However, the unit of an image, the pixel, will always be square. This is how we represent all satellite images, and all digital photographs in modern days as well: each pixel represents a square area of the ground or scene. And the lines drawn here are just for our understanding; in the data itself these lines do not exist, there is a continuous matrix of pixels.

Each pixel has an intensity value. If I am looking at a thermal infrared image, that is an emittance value; if I am looking at a visible channel, it is a reflection value. That is where the word intensity comes in: in an 8-bit channel, if the intensity, say the reflection, is very low, I may get a value near 0 or exactly 0, but if the reflection is very high, as over fresh snow-covered areas, then the pixel value can be 255. So that is what intensity means here: each pixel has an intensity value. The same applies to emittance: if the emittance is high, then in a black and white thermal infrared image I may again get a value of 255 in the 8-bit scenario. So each pixel has an intensity value represented by a digital number or pixel number, sometimes called DN for short, and a location address that is referenced by row and column numbers.
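As a minimal sketch (not from the lecture itself), the idea of a panchromatic image as a two-dimensional matrix of 8-bit digital numbers can be written down in a few lines of Python with NumPy; the DN values and the 10-metre pixel size below are made up purely for illustration.

```python
import numpy as np

# A toy 5 x 6 panchromatic (single-band) image: a 2-D matrix of 8-bit
# digital numbers (DN). 0 renders as black, 255 as white, and everything
# else as an intermediate shade of grey.
dn = np.array([
    [0,   30,  60,  90, 120, 150],
    [20,  50,  80, 110, 140, 170],
    [40,  70, 100, 130, 160, 190],
    [60,  90, 120, 150, 180, 210],
    [80, 110, 140, 170, 200, 255],
], dtype=np.uint8)           # unsigned 8-bit integers: values 0..255 only

pixel_size_m = 10.0          # each pixel is an area average over 10 m x 10 m (illustrative)
rows, cols = dn.shape        # 2-D matrix: rows x columns
print(rows, cols)            # 5 6  -> rectangular image, square pixels
print(dn.min(), dn.max())    # 0 255 -> the extremes map to black and white
```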
Regarding the location address: if I have to address the 0 value in this image, it is row number 1 and column number 6; if I have to address the 255, it is row number 5 and column number 6. This is why a pixel is referenced by its row and column numbers. So each pixel has its location in the image, each pixel represents an intensity value, and the pixel is the unit of an image.

Whenever we use the word unit, it also means indivisible: I cannot divide it further. Some people claim that they can see inside a pixel, or they talk about sub-pixels, but it is not really possible. What I am saying is that once an image has been acquired by a satellite or a sensor, the spatial resolution of that image cannot be changed. Once an image is acquired, the spatial resolution is frozen. This is a simple but important statement. Taking the example of this particular image, where a pixel represents 10 by 10 metres, that is 100 square metres: once I have got the image, can I change it to a 1 metre by 1 metre cell size or pixel size? On a computer I can resample it, but it will not improve the image quality; there is no way it is going to improve, no matter which interpolation techniques, neural networks, fuzzy logic or whatever else I apply. None of that is going to improve the image quality. If that were possible, then for sensors like NOAA AVHRR, which provide data at 1 kilometre resolution, that is 1000 metre resolution, we could acquire an image at 1000 metres and, using computers and interpolation or other techniques, create 1 metre resolution, which is not possible. So I repeat that sentence: once an image is acquired, the spatial resolution cannot be changed. By some image processing techniques you can improve the appearance of an image by enhancing it, but I am talking about the spatial resolution: once the image is taken, one pixel value per 10 by 10 metre cell is what the sensor has registered.
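To illustrate this point, here is a small, hypothetical NumPy sketch: a 2 x 2 block of 10-metre pixels "resampled" to a 1-metre grid still carries only the original four values, so no new detail is created.

```python
import numpy as np

# Illustrative only: resampling a coarse image to a finer grid does not
# add information. Nearest-neighbour "resampling" from 10 m to 1 m pixels
# simply repeats the original DNs.
coarse = np.array([[ 40, 120],
                   [200, 255]], dtype=np.uint8)      # 10 m pixels

factor = 10                                           # pretend 10 m -> 1 m
fine = coarse.repeat(factor, axis=0).repeat(factor, axis=1)

print(coarse.shape, fine.shape)   # (2, 2) (20, 20)
print(np.unique(fine))            # [ 40 120 200 255] -- still only four values:
                                  # the detail inside each 10 m pixel is gone for good
```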
Video 2
Now, we continue with the pixel, because the pixel is the unit of an image and in remote sensing the pixel matters a lot. That is why we continue this discussion: the pixel is the unit of an image, as we have just mentioned, and therefore it is indivisible. A digital image comprises a two-dimensional array, or two-dimensional matrix, of individual picture elements, in short pixels, arranged in columns and rows, which is what we see here. The yellow-coloured cell is one single pixel; again, the vertical and horizontal lines are just for our understanding, and in the real image these lines will not be there.

Each pixel represents an area on the earth's surface; for satellite images this is true, and the pixel value represents the intensity at a particular wavelength. If we are talking about the visible region, it is the reflection; if we are talking about thermal infrared, it is the emittance. So the pixel has an intensity value and a location address, that is, its row and column numbers in the two-dimensional image, and no further detail can be visualized inside the pixel, because it is a unit and the unit is indivisible: I cannot see inside a pixel. That is something one really has to remember. An image is a two-dimensional matrix and therefore can have only two shapes, either square or rectangular: if the number of rows and columns is the same, the image is square, and if the rows and columns differ, the shape is rectangular. Another important statement: the pixel of an image is always square in shape, so the unit is always a square.

The intensity value represents the measured physical quantity. It may be the solar radiance in a given wavelength band reflected from the ground, when we are talking about the visible or infrared; emitted radiation, when we are talking about the thermal infrared; or the backscattered radar intensity, if we are talking about the microwave or passive microwave part of the EM spectrum. We have satellite sensors in the reflected part, in the thermal infrared and of course in the radar wavelengths, so all kinds of sensors are available, and in each case what they record is the intensity of a pixel.

The pixel value is normally an average value, I would say an area-average value, for the whole ground area covered by the pixel, and it has to be a single value. This value also has to be an integer number; one has to remember this. In digital satellite images in particular, the cell value or pixel value is always an integer, that is, a whole number; we cannot have decimals. A general two-dimensional matrix can also hold real numbers, which is true in the case of digital elevation models, but in a digital satellite image the pixel value will always be an integer or whole number. The intensity of a pixel is digitized by the sensor on board the satellite and recorded as a digital number, which we also call DN or pixel value. And because every sensor on board a satellite has only a finite storage capacity, a digital number is stored with a finite number of bits in binary, whether the recording is being done at 8 bits, 10 bits or 11 bits.
For example, most of these sensors normally record data at 8 bits, but sometimes they also go lower: the IRS-1C and 1D panchromatic band, which we call PAN, used to record at 6 bits. When images are recorded at 8 bits, the pixel values can vary between 0 and 255; if it is 6 bits, the pixel values can vary between 0 and 63, 64 values in total; and NOAA AVHRR records at 11 bits. So the pixel values can vary over a very wide range, and this is decided when the sensors are designed.

The number of bits determines the radiometric resolution of an image, and that is what is important here: the higher the number of bits, the higher the radiometric resolution, because finer details, that is, slight changes in reflection or emittance, can be recorded at 11 bits compared with 8 bits. If we go for 6 bits, a 6-bit image may not record a slight change in reflection or emittance, but for the same area, if we record in 11 bits, even minute changes in reflection or emittance can definitely be recorded. So this provides a better radiometric resolution: the higher the bit number, the higher the radiometric resolution, and the lower the bit number, the lower the radiometric resolution. The number of bits determines the radiometric resolution of an image.

For example, an 8-bit digital number ranges from 0 to 255; since 0 is also counted, the maximum is 2 to the power 8 minus 1, and the total number of values is 256. If I display this in black and white, or in grey scales, then 0 will be one extreme colour, black, 255 will be white, and the rest of the values will lie in between. An 11-bit number ranges from 0 to 2047; the same minus-1 applies because 0 is counted as a value, which means a total of 2048 variations can occur in an 11-bit image, whereas in an 8-bit image the total number of variations is 256, not 255, and in a 6-bit image the total number of variations is 64, between 0 and 63 with 0 counted. The detected intensity value, that is, the pixel value, needs to be scaled and quantized to fit within that range of values. This quantization is what the radiometric resolution is, and in a radiometrically calibrated image the actual intensity value can be derived from the pixel digital number; this is what is done when we go for quantitative remote sensing.

The address of a pixel location is denoted by its row and column numbers in the two-dimensional image. When we go for georeferencing, which means bringing geographic coordinates into the image, we can also address pixels with geographic coordinates, but as long as the image is not georeferenced, we generally refer to a pixel location in terms of rows and columns. There is a one-to-one correspondence between the column and row address of a pixel and the geographic coordinates, that is, the latitude and longitude, of that image location, and in order to be useful, the exact geographic location of each pixel on the ground must be derived from its row and column. This is what is done in georeferencing, given the image geometry and the satellite orbital parameters.
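Coming back to the bit-depth arithmetic for a moment, all of these cases follow the same rule: n bits give 2^n grey levels, with DN values from 0 to 2^n - 1. A tiny illustrative snippet (the sensor attributions in the comments simply repeat what is quoted in the lecture):

```python
# Number of grey levels for a given bit depth: 2**bits values, DN range 0 .. 2**bits - 1
for bits in (6, 8, 11):
    levels = 2 ** bits
    print(f"{bits:2d} bits -> DN range 0..{levels - 1} ({levels} levels)")

#  6 bits -> DN range 0..63   (64 levels)    e.g. IRS-1C/1D PAN
#  8 bits -> DN range 0..255  (256 levels)   the most common case
# 11 bits -> DN range 0..2047 (2048 levels)  e.g. NOAA AVHRR, as quoted above
```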
A level-1 kind of georeferencing can be done using the orbital parameters of the sensor or satellite; using those, I would say, a crude level of georeferencing can be achieved. But if we want more accurate georeferencing, that is, accurate geographic coordinates to address individual pixels, then a much more careful georeferencing is required. It will depend on the spatial resolution of the image and on our requirements, but the techniques are available. That means georeferencing techniques exist to transfer our images from the image-geometry domain to the geographic domain.
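As a rough sketch of that row/column-to-coordinate correspondence (not the lecture's own procedure), the simplest case is a north-up, already georeferenced image, where a simple affine relation maps pixel indices to map coordinates; the origin and pixel size below are invented purely for illustration.

```python
# Minimal sketch: pixel (row, col) -> map coordinates for a north-up image.
x_origin = 500000.0    # hypothetical easting of the image's top-left corner (m)
y_origin = 3200000.0   # hypothetical northing of the top-left corner (m)
pixel_size = 10.0      # ground size of one square pixel (m)

def pixel_to_map(row, col):
    """Return the map coordinates of the centre of pixel (row, col), 0-based."""
    x = x_origin + (col + 0.5) * pixel_size
    y = y_origin - (row + 0.5) * pixel_size   # rows increase downwards in the image
    return x, y

print(pixel_to_map(0, 0))   # centre of the first pixel (row 1, column 1 in 1-based terms)
print(pixel_to_map(4, 5))   # row 5, column 6 in 1-based terms
```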
Video 3
Now we come to multi-layer images. So far we have been discussing single images, not coloured images, but generally we also use coloured images, and as you know, when coloured images are created we need at least three primary colours and the respective bands or channels of a sensor to create a colour image. Several types of measurement may be made for the ground area covered by a single pixel. Several types here means measurements in different wavelengths of the same area, and each type of measurement forms an image which carries some specific information about the area. Very soon we will see a real example of this, including a real colour image.

By stacking these different bands, that is, images of the same area taken at the same time by a multispectral scanner or sensor, multi-layer images, or coloured images as I should say, can be formed, and each component image is a layer in the multi-layer image. Multi-layer images can also be formed by combining images obtained from different sensors and other subsidiary data; that is a different thing. That means you can have the same area imaged by the same sensor on two different dates, or the same area covered by two different sensors, and you can combine those as well. Because these are digital, two-dimensional matrices, all kinds of operations are possible. For example, a multi-layer image can consist of three layers: one from a SPOT multispectral image, here a visible channel; another from the ERS synthetic aperture radar, that is, a radar image; and perhaps a layer consisting of a digital elevation model, a DEM, which we cannot directly see as an image because it is a grid, another type of raster. That is also possible.

A multispectral image consists of a few image layers, each layer being an image acquired at a particular wavelength band. So there are two things now. One is multi-layer images: as I have told you, the individual layers can be data or images from different sensors, or from the same sensor on different dates. When we go for multispectral images, on the other hand, the sensor is the same, the wavelengths are different, and from these we create the colour composites. An example is the SPOT HRV sensor, where SPOT is the satellite and HRV is the sensor: operating in multispectral mode it detects radiation in three wavelength bands, the green band of the visible, the red, and the near infrared. We can use all three and create a colour composite. A single SPOT multispectral scene therefore consists of three intensity images in three wavelength bands, and in this case each pixel of the scene, or image, has three intensity values corresponding to the three bands, and we assign them the colours red, green and blue; these three colours can also be assigned to different bands. A multispectral IKONOS image consists of four bands, blue, green, red and near infrared, while a Landsat multispectral image consists of seven bands, but we can use only three at a time to create a colour composite. The more recent satellite sensors are capable of acquiring images at many more wavelength bands: when we started in 1972 with the first Landsat-1, its sensor, the MSS, had only 4 channels, while the later Landsat-8 OLI now has 8 channels.
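A minimal, hypothetical sketch of this band stacking with NumPy: three co-registered bands of the same scene (filled here with random stand-in values, roughly in the spirit of a SPOT HRV-like green/red/near-infrared set) are stacked into one multi-layer array, from which any three layers can be chosen for a colour composite.

```python
import numpy as np

rows, cols = 100, 120
# Stand-in data for three co-registered 8-bit bands of one scene.
green = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
red   = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
nir   = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)

stack = np.dstack([green, red, nir])   # multi-layer image: shape (rows, cols, bands)
print(stack.shape)                     # (100, 120, 3)

# A standard false-colour composite: NIR -> red gun, red -> green gun, green -> blue gun.
false_colour = stack[:, :, [2, 1, 0]]
```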
So things have changed: for example, the MODIS sensors on board NASA's Terra satellite have 36 spectral bands, whereas NOAA AVHRR, which provides thermal channels at a spatial resolution of 1 kilometre, has just 5 channels. So it depends on where the channels are located, how narrow they are, and of course it varies from sensor to sensor. When we have a choice of many channels, rather than just the 4 we had at the beginning, we talk about superspectral images; the bands have narrower bandwidths in superspectral sensors, so that finer spectral characteristics of the target can be captured by the sensor.

So we started with multi-layer images, then multispectral images, we now have superspectral images, and of course then hyperspectral, which is the latest in the series. In hyperspectral imaging, even within the same overall spectral range, we might have hundreds of channels. As shown here, if I plot wavelength versus intensity value, not pixel value, for a vegetated area, as in this schematic, I get this kind of curve. Hyperspectral image data usually consists of over 100 contiguous spectral bands, in some cases 256 channels, forming a three-dimensional image cube with two spatial dimensions and one spectral dimension, with very thin bands. To give an example, a Landsat MSS band spans 0.4 to 0.5 micrometres, which means the bandwidth of a channel is 0.1 micrometre, whereas in the hyperspectral case the width of a channel can be of the order of 0.5 nanometre. That is the kind of bandwidth we are talking about: very, very narrow bands with continuous spectral coverage, no gaps in between.

Each pixel is therefore associated with a complete spectrum of the imaged area, as I have just shown for one pixel. If you go to the same pixel location in all the bands, say 256 bands, you can reconstruct the curve exactly, and because of this continuous coverage, one band after another, hyperspectral images are very good for creating spectral response curves for different minerals, rocks, different kinds of vegetation, different kinds of water bodies and conditions, and so on. In contrast, the Landsat MSS, ETM and OLI series all have gaps between their bands. That is the biggest advantage of hyperspectral images. Hyperspectral images generally have very high spectral resolution, though not very high spatial resolution. This high spectral resolution enables better identification of land covers, although one band may have almost the same characteristics as the next band, whereas if there is a gap between bands, as here, the reflection may jump suddenly from one band to the next. What hyperspectral images record is precise spectral information, which allows better characterization and identification of targets, and hyperspectral images therefore have potential in those fields which require great precision from the imagery. For example, in precision agriculture we can monitor the type, health, moisture status and maturity of crops, because, as you know, crops change as they grow and then ripen, and at different moisture statuses the reflection, especially in the infrared, changes correspondingly.
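As an illustrative sketch (with random stand-in values, not real sensor data), a hyperspectral scene can be held as a three-dimensional cube, and the complete spectrum of any pixel is simply its vector of values across the contiguous bands.

```python
import numpy as np

# Hypothetical hyperspectral cube: 2 spatial dimensions + 1 spectral dimension.
rows, cols, bands = 50, 60, 256
cube = np.random.randint(0, 2048, (rows, cols, bands), dtype=np.uint16)  # stand-in 11-bit-style DNs

# The complete spectrum associated with one pixel is that pixel's vector of
# values across all contiguous bands -- its spectral response curve.
spectrum = cube[10, 20, :]
print(spectrum.shape)    # (256,)
```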
So, if we have hyperspectral images of an area, we can of course see the types of vegetation, the health of the vegetation, the moisture status and the maturity of crops. This is only possible when you have continuous and narrow bands, and by narrow bands I mean bands of nanometre width rather than micrometre width. That is very important for many applications: in mineral exploration, in oil exploration, in water quality assessment and in many other areas, hyperspectral images are very, very useful. Also in coastal management: for example, the monitoring of phytoplankton, pollutants or pollution, and changes in bathymetry can be detected relatively easily with hyperspectral images.

One problem with hyperspectral images is that there is too much data to handle, and therefore people reduce the 256 channels to just a few channels by applying mathematical or other techniques. That is one big problem with hyperspectral images. The second problem, which may change in future, is that currently only a very narrow strip is recorded, maybe 5 kilometres wide in the case of airborne sensors. There is a trade-off here: if you go for very high spatial resolution images, then the swath, the strip of the earth's surface which is covered, becomes very narrow, whereas with relatively coarser resolution images you can cover a large part of the earth. For example, NOAA AVHRR has a spatial resolution of 1 kilometre and covers a swath of about 2800 kilometres, so one image covers a strip of the earth about 2800 kilometres wide. If I go from the 1000 metre spatial resolution of NOAA AVHRR to 1 metre, like IKONOS, then the swath, the strip of the earth covered by the IKONOS sensor, is just about 11 kilometres wide. So although you have improved the spatial resolution from 1000 metres to 1 metre, at the same time you have reduced the coverage from 2800 kilometres to 11 kilometres. Hyperspectral images likewise involve a compromise in terms of coverage, footprint or swath width: the swath is generally very small with hyperspectral images. So if your study area is quite large, you need images from several orbits to study the entire area, and even though the spatial resolution of hyperspectral sensors is generally a little lower, around 5 or 6 metres, the swath is still very narrow.

Currently, hyperspectral imagery is not commercially available from satellites; that is the present situation. Most of the hyperspectral research, work or examples which you see come from airborne hyperspectral images, and there are experimental satellite sensors that acquire hyperspectral images on board different satellites, such as the Hyperion sensor on board the EO-1 satellite and the CHRIS sensor on board the ESA PROBA satellite, but the data may not be available very easily.

Now, this slide summarizes almost everything we have discussed. This is an example of a monochrome image; this is an example of a multispectral image, a coloured RGB image, where RGB stands for the red, green and blue combination; and this is an example of spectroscopy, that is, using a ground-based spectrometer you can obtain the spectral response curve.
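A back-of-the-envelope way to see this trade-off, using the swath and pixel figures quoted above, is simply to count how many pixels the sensor must record per scan line; the snippet below is illustrative only.

```python
# Pixels per scan line grow as the swath widens or the pixels shrink,
# which is one way to see why fine resolution comes with a narrow swath.
for name, swath_km, pixel_m in [("NOAA AVHRR", 2800, 1000),
                                ("IKONOS",       11,    1)]:
    pixels_per_line = swath_km * 1000 / pixel_m
    print(f"{name}: {swath_km} km swath at {pixel_m} m -> {pixels_per_line:.0f} pixels per line")

# NOAA AVHRR: 2800 km swath at 1000 m -> 2800 pixels per line
# IKONOS:       11 km swath at    1 m -> 11000 pixels per line
```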
You go for multispectral images, which are very common in remote sensing even today, or you go for hyperspectral. As you can see here, and as I was mentioning earlier, between one band and the next in a multispectral sensor there may be a gap, but here you do not see any gap: there is continuous spectral coverage in hyperspectral images. Now, when we want to study a certain feature, say in this example a water body, this is what happens: in the blue part we see this kind of reflection, in the green part this kind of reflection, and in the red part this kind of reflection, and this information, coming from the different channels, is used to build a curve. In the case of non-continuous spectral coverage, such as the multispectral coverage you see here, you will have a response for blue, then a gap, then green, then a gap between green and red, rather than a continuous curve as here. So hyperspectral definitely has the advantage from the spectral coverage point of view, but from the ground coverage point of view there is still a limitation. I hope that in future these limitations will no longer be there and we may have very large images covering a wide swath of the earth in continuous spectral bands. This brings us to the end of this discussion. Thank you very much.