Module 1: Image Filtering and Classification

Video 1

In this session we are going to demonstrate three techniques which we have already covered in the lectures: convolution filtering, where I will show how you can edit the filters available in a given software package; principal component analysis; and the decorrelation stretch. For the first demonstration I will continue with the image we have been using, the SPOT image of Moscow. To try the filtering technique we go to Home, then Multispectral, and then the Filtering option, and under Filtering we choose convolution filtering. A lot of ready-made filters are provided, and whenever I select one of them the effect is applied to the image on the right and a preview is shown. If I accept any of these, I get a result such as this one.
I can choose any one of them, but I would also like to show that one can design one's own. Suppose I go for edge enhancement with a 7×7 filter; this is how that filter looks. It is a large filter whose centre value is +97, because this is a high-pass filter: local variation is highlighted and regional variation is reduced, so all the other cells of the convolution kernel are negative (-1). If I want to change the size, I can design my own filter: from 7 it steps automatically to 9, then to 11, so I can choose 11×11, and then I have to provide the values for all the cells. Once that is done, I can execute the convolution filtering. So plenty of options are already in the software; different packages will of course provide different filters, and one can also design new ones. If I look at the example of a low-pass filter, again 11×11, you can see there are 11 rows and 11 columns, and since it is a low-pass filter every value is 1, so a lot of averaging will be done and you get results accordingly. If you want a quick preview of what you are going to get, it appears in the right window automatically and very quickly. There are other well-known, well-established filters, such as Sobel, 7×7 and 5×5 kernels, and directional filters; all the filters one can think of are already built in. Still, as per one's requirements, one can design one's own filter and execute it.
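As a sketch of what the software is doing with these kernels, here is a minimal numpy implementation of convolution filtering. The kernels are illustrative choices, not the package's exact filters: the dialog's 7×7 edge filter used +97 at the centre with -1 elsewhere, while the zero-sum variant below uses +48 so that flat regions map to zero.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a square kernel over the image (edges replicated) and sum the products."""
    k = kernel.shape[0] // 2
    padded = np.pad(image.astype(float), k, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

# High-pass (edge enhancement): large positive centre, -1 elsewhere, so local
# variation is highlighted and regional variation suppressed. The dialog's 7x7
# filter used +97 at the centre; +48 here makes the kernel sum to zero.
high_pass = -np.ones((7, 7))
high_pass[3, 3] = 48.0

# Low-pass: every cell is 1, then normalised, so heavy averaging (smoothing) is done.
low_pass = np.ones((11, 11)) / 121.0

img = np.random.randint(0, 256, (64, 64))   # stand-in for one band of the SPOT scene
edges = convolve2d(img, high_pass)
smooth = convolve2d(img, low_pass)
```

On a perfectly uniform region the zero-sum high-pass output is zero everywhere, which is exactly why such a filter suppresses regional variation and keeps only local edges.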
Now, on the same image, we will look at principal component analysis. The option is available through the Spectral buttons. Here we can choose whether to perform the analysis on the whole image or on a subset, and of course I have to provide the place where the output will be kept; I am keeping it here. I can also specify where the coordinate system will come from, so those details can be provided too. I will ask it to write, in the session log, the eigen matrix and the eigenvalues, and these can be written to a location I provide. The location I am providing is a temp folder, and likewise here I am providing the temporary folder on the C
drive, OK. Once that is done, you can also open the log later and see the matrices. If it is a big file one can go in batch mode, or one may want to perform the analysis on multiple files; some other options are also available there. For the time being, the only question is how many components are desired: since this is a three-band false colour composite, all three components are desired. The processing might take some time, and the software keeps showing us the status. Once it is done, you can see the individual components as well. As you know, in principal component analysis the maximum variability comes out in component 1, while component 3 has the minimum variability and may be little more than noise. Consider instead a six-band scenario, such as Landsat TM,
excluding the thermal band (I am saying six; otherwise there are seven bands). In that six-band scenario I may go for six components, and the last component will contain almost nothing but noise. Likewise, I can get those values very easily. Now let me check where it has kept the output: these are the two files it has created, and if I open them, say in Notepad, I find both the eigenvectors and the eigen matrix, so one can obtain the statistical information about the principal components. Secondly, of course, I get the image itself, so let us look at that as well. I open it, and this is the image that has been created; the components are here, and I can of course view the different images. I may get just one layer, and although it is assigned the red colour that does not matter; I can switch to a multispectral display, since all the layers are in the one file. This is component one, which has the maximum variability. If I go to component three, it has the minimum variability compared with component one. So you get not only the eigenvectors and the eigen matrix but also the three principal components, and you can always make colour composites as well, as I will do here.
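The computation behind this dialog can be sketched as follows: shift each band to zero mean, take the band-to-band covariance matrix, and project onto its eigenvectors. The eigenvalues and eigenvectors are what the software wrote to the log, and PC1 carries the maximum variability. This is a generic sketch, not the package's exact code, and the demo bands are synthetic.

```python
import numpy as np

def principal_components(bands):
    """bands: (n_bands, rows, cols). Returns (components, eigenvalues, eigenvectors),
    ordered so that PC1 carries the maximum variability."""
    n, rows, cols = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)            # shift the origin to the data cluster
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))  # band covariance eigen-decomposition
    order = np.argsort(eigvals)[::-1]             # eigh returns ascending; PC1 first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T @ X                           # rotate onto the principal axes
    return pcs.reshape(n, rows, cols), eigvals, eigvecs

# three highly correlated demo bands, like a false colour composite
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32))
bands = np.stack([base + 0.1 * rng.normal(size=(32, 32)) for _ in range(3)])
pcs, eigvals, eigvecs = principal_components(bands)
```

Because the three demo bands are nearly copies of one another, almost all the variance lands in `pcs[0]`, and the last component is essentially noise, just as in the lecture.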
The interpretation of a principal component composite is of course different, but it provides a lot of variation in the pixel values, and one can even go for classification of such a composite, or use it for other interpretation or for discriminating different objects. So this is how principal component analysis can be done; if the image is large it may take a lot of time, but this image was not very big, so it took only about a minute and the information came very quickly. Now I will go for another technique which we have also discussed: again through Raster, then Spectral, I choose decorrelation stretch. For the decorrelation stretch I again take the same Moscow SPOT image, and I have to say where to put the output, so I am putting it in that temp folder and clicking OK. Now I have decided my output. If you want, you can also create the output in floating point or another format, but for the time being I am choosing the default, and I am asking that the result should be stretched as well, so that I get the full range. The coordinate system has to come from somewhere, so let it
continue as it is, because the image is already georeferenced and I need not make any changes here. Then I go for the decorrelation stretch; it takes some time to initialise, and then it again creates a lot of variability. Recall what principal component analysis does; basically there are two steps. First the origin is shifted to where the data cluster is; then the axes are aligned with the maximum variability present within the cluster, and that axis becomes principal component one, PC1. Perpendicular to PC1 there is another component, PC2, and perpendicular to these two there is another, principal component three, and so on. What we do in the decorrelation stretch is stretch these principal components further, so that the pixel values occupy the maximum space available within that three-dimensional space, the full dynamic range. In this way we go for
more variability in the image. We will now add that image: this is the decorrelation-stretched image. I will switch off the other one, take this one, and of course change the colours; this is what you get, and you can also choose different colour schemes. Now just compare: this was our original image, and this is the decorrelation-stretched image. Much more discrimination among objects can be made than with a simple false colour composite; no matter how you enhance that image, you will not reach this stage the way you can through the decorrelation stretch. The decorrelation stretch creates a lot of colours, a lot of variability, because the principal component axes are being stretched, and the values come to occupy the maximum space available for each band, or each component, between 0 and 255. This is of course a 24-bit scenario: there are three bands, each with 8 bits per pixel, so in total you have 24 bits. As the output shows, if I demonstrate it again: in the original image you hardly see any discrimination, and once you apply the decorrelation stretch, which can be done very quickly, you get beautiful results. Such images can then go for classification or for simple image interpretation, and in that way the reliability of your interpretation and classification can increase very significantly.
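A minimal sketch of the decorrelation stretch described above: rotate onto the principal component axes, give every component the same spread, rotate back, and rescale into the full 0-255 (8-bit-per-band) range. The target spread and the demo bands are illustrative assumptions, not the software's exact parameters.

```python
import numpy as np

def decorrelation_stretch(bands, target_sd=50.0):
    """Rotate the bands onto their principal axes, give every component the same
    spread, rotate back, and rescale into the full 0-255 dynamic range."""
    n, rows, cols = bands.shape
    X = bands.reshape(n, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    X0 = X - mean                                  # shift the origin to the cluster
    eigvals, eigvecs = np.linalg.eigh(np.cov(X0))  # principal component axes
    pcs = eigvecs.T @ X0
    sd = pcs.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0
    pcs *= target_sd / sd                          # stretch every component equally
    Y = eigvecs @ pcs + mean                       # rotate back to band space
    Y = (Y - Y.min()) / (Y.max() - Y.min() + 1e-9) * 255.0
    return Y.reshape(n, rows, cols).astype(np.uint8)

# three redundant demo bands; after the stretch they are nearly uncorrelated
rng = np.random.default_rng(1)
base = rng.uniform(0, 255, (64, 64))
bands = np.stack([base + rng.normal(scale=5.0, size=base.shape) for _ in range(3)])
stretched = decorrelation_stretch(bands)
```

Equalising the component spreads makes the band-to-band correlation drop towards zero, which is why the stretched composite shows so many more colours than the original.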

Video 2

Now we go for another demonstration, this time using public domain software, DIPS, because with it we can develop a better understanding of what is happening; for real processing you can then go to commercial software. In this demonstration of the digital image processing simulator, I am first going to show density slicing, which we have already discussed both when we covered unsupervised classification and in a separate lecture. As you know, the main purpose of classification, or of density slicing, is that there may be a lot of variability, with pixels spanning a large range; in a continuous image we want to discretise the values and create just a few classes. In this demo the software offers two displays: the data view, with the pixel values shown, or the image view. Let us keep the data view, because we want to
understand how this works, and then we will try to add different classes. If I quickly scan the image, I find that the lowest value is 10 and the highest value is, I think, 68. So we will slice this image into a few slices. The first slice we choose runs from 10 to 30, and we add those values. Then we go for another slice with a range of 20, that is, 31 to 50, and then another slice from 51 to, say, 70; with that, I think, we have completed all three. When we select this and say OK, the image is sliced. Now we can see the image view and also the sliced view. Just for simplicity we have created only 3 slices; one can create more slices, or more classes, if prior information is available. As you can see, the entire image, with pixel values ranging from 10 to 68, has been sliced into only three classes. This is how density slicing is performed on a real image.
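The slicing just performed can be expressed in a few lines of numpy; the breaks 10-30, 31-50, and 51-70 mirror the demo, while the tiny image is made up.

```python
import numpy as np

def density_slice(image, slices):
    """Collapse a continuous image into a few classes; slices are (low, high, class_id)."""
    out = np.zeros(image.shape, dtype=int)   # 0 = outside every slice
    for low, high, cls in slices:
        out[(image >= low) & (image <= high)] = cls
    return out

# the demo's breaks: 10-30 -> class 1, 31-50 -> class 2, 51-70 -> class 3
img = np.array([[10, 25, 40],
                [55, 68, 12]])
slices = [(10, 30, 1), (31, 50, 2), (51, 70, 3)]
print(density_slice(img, slices))   # [[1 1 2]
                                    #  [3 3 1]]
```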
This software is only a simulator, for demonstration, but it clearly brings out the concept of density slicing. The next thing we are going to see is the classifier, and here we will create training sets, or signatures, by choosing pixels. When I put the cursor on the image I can see the values, so I say that this will be my first training set, and if any value is close to it I can include that too; for example, if I want to include 9, then 10 is also there. I will pick pixels at two locations because, as I mentioned in the theory discussion of supervised classification, within every image it is always better to choose at least two signatures, or two training sets, per class. So I choose one training set, enter a name, and say that this is bare ground. Since only three pixels were included, I immediately get the minimum, maximum, mean, and standard deviation statistics. Then I choose another training set by clicking 'create new training set', and I say that this black area should be water body; these values are also quite close, so I choose them as a training set, save the signatures, and save it as water body. Again I get the statistics of the training set. Then I create one more class, one more training set, using these grey values ranging around 66 to 68, and one more training set beside it, so I have chosen two training sets for this particular class; as per my prior information I am saying this might be agricultural land. Finally I choose one more training set using these light grey values, around 91, 80, 93, 89, and another nearby (one value is 89, while another pixel, maybe 70, goes very far), and save it as forest. Training sets for four classes have now been chosen, so I say stop collecting, and classify. Two methods are supported in this demonstration software: one is MDM, which stands for minimum distance to mean, which we discussed earlier under supervised classification, and the second is the parallelepiped. In the theory class we discussed two types, the standard parallelepiped and the precise parallelepiped; here only one option is available. So I go for minimum distance to mean, and when I run the classification, this is the result I get.
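For a single-band image like the simulator's, minimum distance to mean reduces to the sketch below; the class means are hypothetical stand-ins, not the demo's exact training-set statistics.

```python
import numpy as np

def minimum_distance_classify(image, class_means):
    """Assign each pixel to the class whose training-set mean is nearest."""
    names = np.array(list(class_means))
    means = np.array([class_means[n] for n in class_means], dtype=float)
    dist = np.abs(image[..., None].astype(float) - means)  # |pixel - mean| per class
    return np.take(names, dist.argmin(axis=-1))

# hypothetical single-band class means, in the spirit of the demo's training sets
means = {"water": 5.0, "bare ground": 10.0, "agriculture": 67.0, "forest": 88.0}
labels = minimum_distance_classify(np.array([[9, 66], [90, 4]]), means)
# labels -> [["bare ground", "agriculture"], ["forest", "water"]]
```

Note that every pixel gets some class under this rule: there is always a nearest mean, which is why minimum distance to mean leaves nothing unclassified.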
All the pixels I assigned are showing up as the different classes. I can select a class here, and since I know how these calculations are done, one can also check individual classes and see what kind of statistics are involved. I thought something went wrong, but no, it is fine. So I can keep checking my classification results very easily using these methods. Two methods are available, so I can go for the other method, classify again, and for the number of standard deviations just keep one. This shows, as discussed in the theory class, that with the
same training sets and the same input image, two different algorithms of supervised classification give two different results. So one has to be very careful here. Many pixels went unclassified, because for this demonstration I did not collect training sets exhaustively; had I gone for at least 7 or 8 training sets, probably the entire image would have been classified into the different classes. Nonetheless, in real software you can add a few more training sets, go back and make corrections, and classify again until you are satisfied, either by matching against other results or by doing the accuracy assessment. Those things can be done quite easily. With that we can complete this demonstration of classification and density slicing.
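The other classifier in the demo, the parallelepiped with a chosen number of standard deviations, can be sketched like this for a single band. The class statistics are hypothetical; pixels outside every box stay unclassified, which is exactly why so many pixels went unclassified above with only one standard deviation.

```python
import numpy as np

def parallelepiped_classify(image, class_stats, n_sd=1.0):
    """Assign a pixel to a class only if it falls inside that class's box
    (mean +/- n_sd standard deviations); otherwise it stays unclassified."""
    out = np.full(image.shape, "unclassified", dtype=object)
    for name, (mean, sd) in class_stats.items():
        inside = (image >= mean - n_sd * sd) & (image <= mean + n_sd * sd)
        out[inside & (out == "unclassified")] = name
    return out

# hypothetical single-band class statistics (mean, standard deviation)
stats = {"water": (5.0, 2.0), "forest": (88.0, 3.0)}
print(parallelepiped_classify(np.array([4, 50, 89]), stats))
# ['water' 'unclassified' 'forest']
```

Widening the boxes (raising `n_sd`) or adding more training sets shrinks the unclassified area, at the cost of more overlap between class boxes.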

Video 3

Now the last topic, which is very important in the case of classification, is accuracy assessment. The top image you are seeing is the classified image, which has five classes: water, urban, shrub, forest, and barren land.
These classes are there; now we go for accuracy assessment to see how accurately they have been classified, on the same image we have been using. You have to decide what percentage of reference points should be taken: 50% of the total ground reference (the prior information we have), or 10%. With a larger sample I may get different results, but it takes more time. Anyway, I am going for 10 percent, choosing OK, and then identifying the sample pixels. Now 77 random pixels on the classified image have been identified, with corresponding references on the ground image you see here. Then finally we create the error
matrix. Here we get the error matrix, which requires a little interpretation; it is basically a two-dimensional table. The same five classes (water, forest, and so on) appear along both the rows and the columns, you can see the totals of each column and row and of the diagonal, and the overall accuracy it reports is 71.43%. This kind of quick assessment can also be done in professional or commercial software. Instead of the error matrix alone, I can also go for the kappa coefficient and run the calculations again. I keep moving through the table and get the values for each cell against its row total and column total; for each cell of this two-dimensional table the calculation is done, as you can see, and then the results come. The calculation is completed, and the matrix summary says that the diagonal sum is 16 and the grand total for the kappa coefficient calculation is 49. This is how the calculation works: it is the observed agreement minus the expected agreement, where the expected agreement comes from the 10% sample which we chose at the very early stage. If we go for 50% I expect the estimate to be more dependable, because with more random points, or random pixels, selected to compare with the classified image there is less chance of a misleading result. The ground reference may come from ground truthing or from some other map; here some other map has been chosen, and the kappa coefficient comes to around 0.576, which may not be very good, but this is a demonstration.
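The error matrix, overall accuracy, and kappa computations behind these dialogs can be sketched as follows. Class labels are integer codes; putting reference classes on rows and classified classes on columns is a common convention, assumed here rather than taken from DIPS.

```python
import numpy as np

def error_matrix(reference, classified, n_classes):
    """Two-dimensional table: reference classes on rows, classified on columns."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference.ravel(), classified.ravel()):
        m[r, c] += 1
    return m

def overall_accuracy(m):
    return np.trace(m) / m.sum()             # diagonal sum / grand total

def kappa(m):
    """Observed agreement minus the agreement expected by chance,
    normalised by the maximum possible improvement over chance."""
    total = m.sum()
    observed = np.trace(m) / total
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / total ** 2
    return (observed - expected) / (1.0 - expected)
```

With the demo's 10% sample of 77 pixels the software reported 71.43% overall accuracy and a kappa of about 0.576; these functions compute the same kind of summary for any sampled reference and classified pair.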
Let us try the same thing again with 50% and see what happens. We go for 50% sampling instead of 10% and then identify the pixels; this sample image has 64 pixels in total, so 32 random points, roughly half, have been created randomly. Now we go for the matrix again, and it is not exactly as I suggested: the accuracy has come down. The total percentage of accuracy was somewhere about 71; now we are getting 62.5. So it is not always the case that a larger sample gives a more accurate-looking result. In the same way you go for the kappa calculation, clicking through all the cells, and you get these values; the accuracy part has definitely gone down in this case, though in another example you might get better results as well. Whenever classification is performed, whether supervised or unsupervised, an accuracy assessment must be done, and your report should include a statement of how accurate the classification is; 0.625 is not a very good classification result, but nonetheless this is how one has to report, so that the real picture comes out after the classification. When an image has a lot of heterogeneity in its pixel values, your accuracy is going to be reduced, or very low. However, with a very homogeneous image, for example if you are working in a desert area, a dense forest, or a snow-covered area, most pixels will have very similar, close values, and your classification will achieve very high accuracy.
But if you are working in an area with a lot of heterogeneity, say agricultural land where small plots carry different kinds of crops whose spectral characteristics are completely different, then when you assess the accuracy you will not achieve a very good figure. So for homogeneous images, whose pixel values are close together, you may achieve very good accuracy.
In heterogeneous images you may not achieve very good accuracy. This brings us to the end of this discussion and these demonstrations. First, through the commercial software, I demonstrated convolution filtering: how to edit the filters and how to design your own. Then principal component analysis: how to access the matrices and the individual components, and how to make a combined composite. Then the decorrelation stretch, in comparison with the original image. Finally, through the DIPS software, we saw the demonstration of density slicing, which you can do the same way in commercial software; secondly the classifier; and third, accuracy assessment. I have not used commercial software the whole time, because the purpose here is not to promote any one package, and I prefer to use DIPS wherever possible: first of all it is not commercial, and it was developed in an academic institute. It is better to first understand what goes on behind each processing step in commercial software, and that can only be understood through software like DIPS, which I have been using. This brings us to the end of this demonstration. Thank you very much.