Image Characteristics and Resolution

Video 1
We are going to discuss image characteristics, especially of digital images, that is, the remote sensing or satellite-based images we will be using, along with the different resolutions in remote sensing. In earlier discussions we have touched on resolutions, but not all of them and not in detail or with proper definitions. So today we will look at the definitions of the different resolutions; there are 4 in remote sensing, and we will discuss all of them.

Let us start with the famous phrase that a picture is worth a thousand words. I have added one more line to it: a satellite image is worth ten thousand words. As we discussed in the earlier part of this course, remote sensing images are digital, spatial and generic, and generic here means that these images can be used for various purposes. Some people might use them for weather forecasting, some for vegetation cover or a vegetation index, some for snow cover mapping, some for civil engineering; a geologist might use them to find water resources or mineral resources. Different people put them to different uses: a single image can serve ten or more types of applications. That is why a picture tells a thousand words whereas a satellite image tells ten thousand. During this course we will also learn how to exploit the capabilities of satellite images for our own benefit and applications.

So what exactly is an image? An image is basically a pictorial representation of an object or a scene, and it comes in various forms. The analog image used to be the most common form; now we also have the digital form, which is what we get from satellite images. Digital cameras, whether an SLR or the camera inside a mobile phone, also produce digital images, informally called snapshots, so we will also discuss the difference between an image taken by a satellite sensor and one taken by your mobile or digital camera. The analog image is produced by photographic sensors on paper-based or transparent media; that is why we call it analog. Earlier, when we did not have digital cameras, a scene was exposed onto a film, usually as a negative, and a positive print was then made from that negative. Those are analog images. Aerial photographs are another example; they are used in many countries, but because of security issues in India the latest aerial photographs are not easily available. The older aerial photographs of India, however, are all analog images: they were taken by a photographic sensor, a camera, using film. Variations in scene characteristics are represented as variations in brightness, and the aerial photographs we used to work with were generally in black and white, in shades of grey. Coloured aerial photographs also became available later, but all of these were taken by a photographic camera on film.
The basic principle here is the same as for satellite images: objects reflecting more energy appear brighter on the image or photograph, objects reflecting less energy appear darker, and objects that reflect hardly any energy appear almost black on a grey image or photograph.

What exactly is a digital image? It is one produced by electro-optical sensors, whether onboard a satellite or handheld, such as a mobile or a digital camera. In the next discussion we will also look at how exactly these images are created by onboard satellite sensors. Going into more detail: a digital image produced by electro-optical sensors is composed of tiny, equal-sized, square picture elements, which we call pixels, arranged in a two-dimensional matrix; a two-dimensional array of detectors is what a typical digital camera uses. Each pixel is associated with a number, the digital number (DN), also called the pixel value or brightness value or, when we talk in terms of grey scales, the grey value, which is a record of the variation in radiant energy in discrete form. What these electro-optical sensors are doing is recording the brightness or reflection, or the radiant energy, perhaps thermal, that reaches the sensor. And just as with an analog photograph, an object reflecting more energy records a higher number for itself on the digital image, and vice versa: an object reflecting very little energy will be recorded with a low value. If I take the example of an 8-bit image, the values may vary between 0 and 255. Objects reflecting no energy, or very little, may have pixel values near 0: for example, in an infrared channel a pure water body absorbs the infrared energy almost completely, so there is hardly any reflection and the water body may appear completely dark, with pixel values near 0 or even exactly 0. Similarly, pixels recording high values might come from something like a fresh snow surface, which has a very high albedo and might record values near 255. This is how digital images are created. Of course, when we combine three bands we obtain colour, which we will discuss later.

So an image is basically raster data: in the mathematical domain it is a two-dimensional matrix, and the unit of this matrix is the pixel, which is square in shape. The overall shape of a raster dataset, the two-dimensional matrix, can be square or rectangular, but the unit will always be square; in our case, for a digital image, that unit is the pixel. This is the limitation, or you can say the advantage, of raster data: each unit has the same size and the same shape, and that shape is square.
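To make the idea of a digital image as a two-dimensional matrix of digital numbers concrete, here is a minimal sketch in Python with NumPy. The array values are invented purely for illustration, following the lecture's examples: near-0 DNs for water in an infrared band, near-255 DNs for fresh snow.

```python
import numpy as np

# A toy 4 x 4 single-band "image": a 2-D matrix of digital numbers (DNs).
# In an 8-bit band the DNs are positive integers in the range 0..255.
image = np.array([
    [  0,   2,   1, 130],   # DNs near 0: e.g. clear water in an infrared band
    [  3,   1, 128, 135],
    [120, 125, 250, 255],   # DNs near 255: e.g. fresh snow (very high albedo)
    [118, 252, 254, 251],
], dtype=np.uint8)

rows, cols = image.shape                 # rows run horizontally, columns vertically
print(f"{rows} rows x {cols} columns")
print("DN at row 2, column 2:", image[1, 1])   # origin taken at the top left
print("min DN:", image.min(), "max DN:", image.max())
```

The `uint8` data type enforces exactly the property described above: pixel values are whole, unsigned numbers, with no decimals and no negative signs.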
An example is shown here: if we zoom into this grey image, this is what we see, except that the black grid lines marking individual pixels are not really there; they are only used for demonstration. This is an example of an 8-bit image, so you can see pixels whose values are 0, meaning no reflection is coming at all in the visible or infrared, and some pixel values of 255; the rest lie in between. So this is a typical example of an 8-bit digital image.

When we discuss all this we also use some terms which I will explain here: the number of rows and the number of columns. Rows run in the horizontal direction and columns in the vertical direction. Generally, when an image is in the geometric domain, the origin is taken at the top left, but when an image has geographic coordinates, which we will discuss under a later topic, georeferencing of satellite images, then once the image is georeferenced the origin shifts to the bottom left. So normally, when an image is acquired by a satellite, positions are referenced from the top left; for example, to refer to this pixel I would say column 2, row 2, and address others likewise. Once I have geographic coordinates, the origin is at the bottom left.

Now the pixel value: the pixel gives the location, and next comes the value. The pixel value is the magnitude of electromagnetic energy captured in the digital image, represented by a positive integer, the digital number. You can see in this example that all values are positive integers: there are no decimals, no floating-point or real numbers, and no positive or negative signs; when there is no sign we take the value as positive, and these are all whole numbers. So in any image, whether taken by a satellite sensor, a digital camera or your mobile camera, if you start inspecting it with some software you will find that the pixel values are always positive whole numbers. These numbers are stored as binary digits, or bits, and a pixel value may vary between 0 and a selected power of 2 minus 1: for an 8-bit image 2 to the power 8 levels, for a 7-bit image 2 to the power 7, and so on, depending on the sensor.

Video 2

Each bit records an exponent of the power of 2. For example, 1 bit gives 2 to the power 1 = 2: a binary image, where pixel values can be 0 or 1, so the total number of variations is 2. The maximum number of brightness levels likewise depends on the number of bits used to represent the recorded energy. If I take the example of an 8-bit image, one byte per pixel, then 2 to the power 8 gives the total number of values we can have: 256 values, between 0 and 255. Zero is also counted, which is why the total is 256. The 8-bit image is the most common one, and most digital image processing and photo editing software can handle 8-bit images very easily. If you combine 3 bands to create a coloured image it becomes 24-bit, and such software can still handle it.
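The relationship between bit depth and the number of brightness levels is easy to verify: an n-bit pixel can take 2 to the power n values, from 0 to 2^n - 1. A short sketch:

```python
# An n-bit pixel can take 2**n distinct values, ranging from 0 to 2**n - 1.
for bits in (1, 2, 4, 7, 8, 16, 24):
    levels = 2 ** bits
    print(f"{bits:>2}-bit image: {levels:>10,} levels, DN range 0..{levels - 1:,}")
```

For 8 bits this prints 256 levels (0..255), and for 24 bits 16,777,216, the roughly 16.7 million colours of a three-band colour composite.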
So this 8-bit combination is the best suited, and many satellite sensors acquire such images. Here is a sort of summary: a 1-bit image is a binary image, with 2 possible variations, so a pixel value can be either 0 or 1. If I take the example of 7 bits, 2 to the power 7 gives 128 values, varying between 0 and 127. Similarly, for a 16-bit image, 2 to the power 16 is 65,536, so a lot of variation is available. But one always has to keep in mind the storage requirement: the higher the bit depth of an image, that is, the deeper each pixel, the more space it requires, and the number of representable values grows exponentially with the bit depth. So you can imagine that a 24-bit image can have about 16.7 million colours, or variations in pixel values, because three 8-bit bands have been joined, and the space requirement becomes three times that of a single band.

Let me take some examples. Of course, these were not acquired by the satellite in this form; they have been degraded just for demonstration. The original is an 8-bit image, which has been degraded to 4 bits, then to 2 bits, and then to binary. As you can see in the binary image, the example on the left, you have only 2 values, black or white, and nothing else. In the 2-bit image, the maximum number of variations is 4, so a few grey values are also possible: you see black, very dark grey, light grey and white. When we go to 8 bits, the total number of variations the image can have in terms of pixel values is 256, so its appearance is much better compared with the 1-bit, 2-bit and 4-bit versions. The higher the bit depth, the more variation an image can have; even in black and white it will look much better. But keep in mind that there is no change in spatial resolution here: the spatial resolution remains the same in all 4 examples; only the quantization, the radiometric resolution, which we will of course discuss later, has changed. So an 8-bit image, just for demonstration, has been degraded to 4 bits, 2 bits and 1 bit, and you can see what happens to the quality. These are the trade-offs, but 8-bit is the most common.
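The degradation shown in these examples can be reproduced by requantizing the DNs; the pixel grid, and hence the spatial resolution, stays untouched. A minimal sketch, assuming an 8-bit NumPy array for the band (the gradient used below is synthetic, just for demonstration):

```python
import numpy as np

def requantize(band8: np.ndarray, bits: int) -> np.ndarray:
    """Reduce an 8-bit band to `bits` of radiometric resolution.

    The number of pixels (spatial resolution) is unchanged; only the
    number of grey levels drops, from 256 to 2**bits.
    """
    step = 256 // (2 ** bits)            # width of each new grey-level bin
    return ((band8 // step) * step).astype(np.uint8)

# Demo on a synthetic 8-bit gradient; a real band would be read from a file.
band8 = np.tile(np.arange(256, dtype=np.uint8), (32, 1))
for bits in (4, 2, 1):
    q = requantize(band8, bits)
    print(f"{bits}-bit version: {len(np.unique(q))} distinct grey values")
```

This prints 16, 4 and 2 distinct grey values, matching the 4-bit, 2-bit and binary examples in the lecture.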
Then we go for coloured images. Here we use a colour space with an additive colour scheme: there are 3 primary colours, blue, green and red, BGR or RGB, whichever way you like. Here is the example: from the visible part of the EM spectrum one image of the green band and one of the red band, plus a near-infrared band. Why a near-infrared band? Because the target here is to create a false colour composite, so that I can discriminate vegetation much more easily. As you know from the spectral response curves we discussed, vegetation has high reflectance in the infrared part of the EM spectrum. So this infrared channel has been assigned the red colour, the red channel of the visible part of the EM spectrum has been assigned green, and the first one, the green band, has been assigned blue. When we combine these using the additive colour scheme, what we get is a false colour composite. Why false colour? Because vegetation is not appearing as green, which is what we would normally expect. Why is it not green? As just explained, the infrared channel, in which vegetation has high reflectance, has been assigned the red colour. Now there can be a question: why assign red to the infrared channel? Should the red channel not have been assigned red, and the green channel green, with the remaining band blue? One can create that combination too, but certain standards were established when we started using remote sensing, back in 1972. A standard false colour composite always means that the infrared channel has been assigned the red colour, and we use this kind of combination to create it; everyone understands, once I say standard false colour composite, that the infrared channel is shown in red. The purpose is to discriminate objects, to depict and identify different objects very easily using this colour combination. Each image here is 8-bit, so when we combine them the false colour composite becomes 24-bit, with about 16.7 million possible colour variations, because 3 bands have been combined. This one has to remember.

Now, as you know, single-channel data is acquired using a panchromatic camera, which many satellites carry. Our own Indian IRS programme started with IRS-1A and 1B, then IRS-1C and 1D and so on, and later Cartosat and Resourcesat; the panchromatic camera was not there on IRS-1A and 1B but came in with the later satellites. At the same time we also had multispectral sensors, which gather data for different parts of the spectrum. Here is the example: band 1, band 2, band 3 and band 4 represent different parts of the EM spectrum, and this scene is from our own IRS-1B (LISS-II was the sensor; IRS-1B was the name of the satellite). It shows part of the foothills of the Himalaya, the Paonta Sahib side. What we see is that band 1 covers 0.45 to 0.52 micrometre, the next band runs from 0.52 to 0.59, then after a small gap the next starts at 0.62, and so on. We can use these channels as you can see: the infrared band 4 shows altogether different characteristics for vegetation, whereas in bands 1, 2 and 3 the vegetation appears dark. So if I combine them, assigning the red colour to band 4 and blue and green to two of the other three bands, I get a false colour composite image. This is how a false colour composite can be made: by choosing 3 bands that have the maximum variation present within them. Of course, in a standard false colour composite the infrared channel is always assigned the red colour.
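A standard false colour composite is, in code terms, just a band-to-channel assignment. A minimal sketch, assuming three co-registered 8-bit NumPy arrays of the same shape (the random bands below stand in for real data):

```python
import numpy as np

def false_color_composite(green: np.ndarray, red: np.ndarray,
                          nir: np.ndarray) -> np.ndarray:
    """Stack three co-registered 8-bit bands into a standard FCC.

    The channel assignment follows the convention described above:
    near-infrared -> red, red -> green, green -> blue, so vegetation,
    highly reflective in the near infrared, appears red.
    """
    return np.dstack([nir, red, green]).astype(np.uint8)

# Demo with synthetic bands; real bands would be read from an image file.
rng = np.random.default_rng(0)
green, red, nir = (rng.integers(0, 256, (64, 64), dtype=np.uint8)
                   for _ in range(3))
fcc = false_color_composite(green, red, nir)
print(fcc.shape)   # (64, 64, 3): three 8-bit channels, i.e. a 24-bit composite
```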
Video 3

Now, resolutions. There are 4 types of resolution which we discuss and use in remote sensing. First we take spatial resolution: the ability to distinguish closely spaced objects on an image. If you search the internet, or some books, you will find somewhat more complicated definitions of resolution, but this is the briefest and most appropriate one: the ability to distinguish closely spaced objects on an image.

The point, in other terms, is the quality of the image: when we have a poor-quality image it becomes difficult to distinguish the different objects present in it, but with a quality image I am able to distinguish each and every object very easily, especially the closely spaced ones. If 2 adjacent objects have different reflection or emission characteristics, I should be able to distinguish them as easily as possible; when I find that situation, I say the spatial resolution is high. Spatial resolution is also defined in terms of the area covered by a pixel: for example, if one sensor covers a 10 metre by 10 metre area per pixel and another covers 1 metre by 1 metre, then the 1 metre sensor has the higher spatial resolution. And whenever we say higher, lower or coarser spatial resolution, these are always relative terms. Recall the brief discussion we had on NOAA AVHRR data: the full name of the AVHRR sensor is Advanced Very High Resolution Radiometer, yet its 1.1 kilometre pixels cannot be called very high resolution by today's standards; at the time it was developed, in the late 1970s, it definitely was. Likewise, the 20 metre or 10 metre we used to call high resolution a few years back is no longer high; now we talk in centimetres. So spatial resolution is always spoken of in relative terms, and you can also compare through the size of the pixel on the ground: if a pixel represents a 10 by 10 metre, that is 100 square metre, area, and a pixel of another sensor represents a 1 square metre area, then the latter has the higher spatial resolution (a hundred of its pixels fit inside one coarse pixel). It fits the definition very well: in a 1 metre resolution image I will be able to distinguish closely spaced objects much more easily than at 10 metres.

Now, there is another resolution, called spectral resolution: basically the location, width and sensitivity of the chosen wavelength bands, above all the width of a band. The narrower the band, the higher the spectral resolution; the broader the band, the lower the spectral resolution. For example, panchromatic bands, panchromatic sensors, generally cover a large part of the EM spectrum compared with bands like the near infrared, so in that sense the spectral resolution of a panchromatic sensor is relatively coarse or poor, because its width is much greater. In hyperspectral remote sensing, hyperspectral images, the spectral resolution is very high, because the bandwidth of individual bands might be around 1 nanometre, or 0.01 micrometre, or thereabouts: very, very narrow bands within the same part of the EM spectrum, and then we say hyperspectral. Generally, though, the images we handle are like the Landsat example: 0.4 to 0.5 micrometre, 0.5 to 0.6 and so on, a bandwidth of 0.1 micrometre. Later on those bands were shifted, so the locations changed, and the sensitivities of the bands as well.

Now, the third resolution we talk about is temporal resolution: the repeat coverage, how frequently the satellite covers the same area, or, we can also say, the time difference between observations.
There is an inverse relationship between spatial resolution and temporal resolution. For example, a sensor with high spatial resolution may have very poor repeatability, or temporal resolution, because high spatial resolution images have a very narrow swath, which means that revisiting the same area may take 28 days or so. Whereas a relatively coarser resolution sensor like NOAA AVHRR, with its 1.1 kilometre spatial resolution, has a very wide swath, about 2,100 kilometres, and therefore it revisits twice in the daytime and twice at night: 4 times in a day it can cover almost the same area. So the higher the spatial resolution, the lower the temporal resolution, and vice versa.

The fourth and last resolution we discuss in remote sensing is radiometric resolution, which we touched upon a little earlier: basically the number of bits used to store the information of an image or a band. If one image is 6-bit and another is 8-bit, the 8-bit image definitely has the better radiometric resolution; the higher the number of bits used to store a pixel value, the higher the radiometric resolution. You can also call this the precision of the observation, not the accuracy; accuracy and precision are 2 different terms, and this one has to remember. Accuracy is a statistical term, whereas precision depends on the instruments used; in this particular case precision means how precisely you are recording the pixel values, and 8 bits compared with 6 bits definitely gives higher precision and higher radiometric resolution. So these are the 4 resolutions discussed in remote sensing, in day-to-day discussion too, and in future lectures I will be mentioning these resolutions on different occasions.

Video 4

Now we come to something that is, in some way, also spatial resolution: we can also call it resolving power, the power to resolve certain objects within a pixel. As you know, a pixel represents an areal average: if I talk in terms of reflection, whatever reflections or variations are present within the ground area equivalent to one pixel are averaged, and a single value is assigned to that pixel of the image. So in remote sensing, resolution or spatial resolution means resolving power: the capability to identify the presence of 2 objects. Sometimes the image may be of coarser resolution, yet 2 adjacent objects with very contrasting characteristics can still be identified; one has to look from that direction too, and I will give you an example. When we started using Landsat MSS data, the spatial resolution was around 80 metres, but we were still able to distinguish a single railway track, and as you know a railway track is not 80 metres wide: including the distance between the 2 rails and a little of the surroundings, it is hardly 2 or 3 metres in total. The reason we were able to distinguish railway lines was, first, that they had contrasting characteristics relative to their surroundings, and second, that they were linear features. So objects like that, though perhaps smaller than the spatial resolution, can still be identified if they contrast with their surroundings.
So 2 adjacent objects, if they contrast with their surroundings, can be identified even on relatively coarse resolution images. Resolving power is also the capability to identify the properties of 2 objects, properties here meaning reflection, emission and so on; from these brightness values we can then derive other parameters. An image that shows finer detail is said to be of finer resolution compared with an image that shows coarse detail. So when I say I have a high-quality image, I am talking about fine resolution: the pixel size is very small, representing a very small part of the earth, 1 metre compared to 100 metres; 100 metres is always coarse, in relative terms, compared with 1 metre.

Some samples are given here: a 1 metre resolution image, then 2 metre, 4 metre and 8 metre. As you can realize, these again were generated for demonstration: the image quality has been degraded so that we can see how the same area would look if a sensor acquired it at 8 metre resolution, and how it looks at 1 metre (a sketch after this passage shows how such coarsening can be simulated). The 1 metre resolution image shows much more detail than the 8 metre one: in the 8 metre image in this example you can hardly distinguish anything, and even at 4 metre resolution you cannot, but at 2 metres you start distinguishing things quite easily, and at 1 metre, of course, much more easily. Similarly, here we have 10 metre pixel resolution, 30 metre and 80 metre; the 10 metre image, being the finest resolution, has the better quality, and therefore I can identify different objects much more easily. The top example is in black and white, in grey, and the bottom example is a false colour composite, in which you can identify things very easily.

So higher spatial resolution is always better for certain types of applications, but not for all, because the basic advantage of remote sensing was that it provides a synoptic view: a single image covers a very large area, and that very large area can be seen, interpreted and analysed in one go. When we go for high spatial resolution images, this synoptic view gets reduced and we go narrower and narrower; in the case of a hyperspectral image, maybe only a 5 kilometre strip is being assessed, so you cannot have a synoptic view. High spatial resolution thus in some way goes against the original philosophy of remote sensing, the synoptic view. Nonetheless, there are different requirements for different purposes and applications, and images of higher and higher spatial and spectral resolution are being acquired and used.
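The 1, 2, 4 and 8 metre samples described above can be imitated by block-averaging a fine-resolution band, the spatial counterpart of the radiometric requantization shown earlier: here the grey levels stay intact and only the pixel footprint grows. A hedged sketch, assuming a NumPy band whose pixel size is 1 metre:

```python
import numpy as np

def coarsen(band: np.ndarray, factor: int) -> np.ndarray:
    """Simulate a coarser spatial resolution by block-averaging.

    Every factor x factor block of fine pixels is averaged into one
    coarse pixel, e.g. factor=8 turns a 1 m image into an 8 m one.
    The radiometric resolution (bit depth) is left unchanged.
    """
    rows, cols = band.shape
    rows -= rows % factor                # trim edges so blocks divide evenly
    cols -= cols % factor
    blocks = band[:rows, :cols].reshape(rows // factor, factor,
                                        cols // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(band.dtype)

# Demo: a synthetic 64 x 64 "1 m" band coarsened to 2, 4 and 8 m pixels.
rng = np.random.default_rng(1)
band = rng.integers(0, 256, (64, 64), dtype=np.uint8)
for factor in (2, 4, 8):
    print(f"{factor} m version:", coarsen(band, factor).shape)
```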
It is further said of spatial resolution that the size of the smallest dimension on the earth's surface over which an independent measurement can be made by the sensor also indicates the spatial resolution. This size, the ground-equivalent size of the pixel in metres, is controlled by the instantaneous field of view, the IFOV: if this solid angle is very narrow, your pixels will be of higher spatial resolution, but if the IFOV is large, you are going to have a coarser resolution (a numerical sketch at the end of this section illustrates the relation).

The example given here shows this: in the first case the IFOV angle is smaller and I get a high spatial resolution image, whereas in the second the IFOV is larger and I get a coarser resolution image. It is also demonstrated that when the sensor is looking directly downward, it acquires a perfectly square area represented in the pixel, whereas with off-nadir viewing, or because of the curvature of the earth, the recording still goes into a square pixel but the ground area being covered is not really square or rectangular; that is one kind of situation. So the IFOV matters just as much as the spatial resolution; the two are linked together. The IFOV is the angular cone of visibility of the sensor, a solid angle that determines the area seen from a given altitude at a given time, as shown here; the ground area corresponding to the IFOV at that altitude is known as the ground resolution cell (GRC), or ground resolution element. In the figure, A is the sensor and B and C are the areas on the ground. Again, spatial resolution is linked with the IFOV: the larger the IFOV, the coarser the spatial resolution you will have. A very popular figure in many remote sensing books and other literature illustrates the geometrical construction of the instantaneous field of view by projection from a pixel on the image plane, as you can see here; but remember, the IFOV is a solid angle.

Finally, we can put proper words to spectral resolution: the spectral resolution of a sensor refers to the number and location of the spectral bands in which the sensor collects data, and to how wide those bands are. Remember: the narrower the band, the higher the spectral resolution; the broader, the wider the band, the lower the spectral resolution.
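Returning to the IFOV geometry above: for a sensor at altitude H with an instantaneous field of view of β radians, the diameter of the ground resolution cell at nadir is D = 2H tan(β/2), which for the tiny angles used in remote sensing is essentially D = H β. A minimal sketch; the altitude and IFOV below are roughly Landsat-TM-like values chosen only for illustration, not figures from the lecture:

```python
import math

def ground_resolution_cell(altitude_m: float, ifov_rad: float) -> float:
    """Diameter of the ground resolution cell (GRC) at nadir.

    Exact cone geometry: D = 2 * H * tan(IFOV / 2); for the microradian
    angles typical of spaceborne sensors this reduces to D = H * IFOV.
    """
    return 2.0 * altitude_m * math.tan(ifov_rad / 2.0)

# Illustrative numbers (assumed, roughly Landsat-TM-like):
altitude = 705e3    # sensor altitude in metres
ifov = 42.5e-6      # instantaneous field of view in radians
print(f"GRC diameter: {ground_resolution_cell(altitude, ifov):.1f} m")  # ~30 m
```

A narrower IFOV or a lower altitude shrinks the ground resolution cell, which is exactly the statement in the lecture that a narrow solid angle gives higher spatial resolution.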