Hello everyone, and welcome to a new topic, which is SAR interferometry, or in short InSAR. When we started discussing active microwave remote sensing, I referred once or twice to SAR interferometry or InSAR. This discussion is in two parts: first we will discuss the technology itself, that is, how InSAR works, and then we will see some examples or applications of SAR interferometry. Now, what exactly is interferometry? In SAR interferometry we observe the same area from slightly different look angles, so we have a pair of images. This can be done simultaneously, as happened in the case of SRTM with two radar antennas mounted on the same platform through a mast, or it can be done at two different times, through repeat orbits of the same satellite and sensor over the same area, with only a time difference between the acquisitions. If I take the example of, say, the ERS satellite
or the ENVISAT satellite, these satellites used to revisit the same area after 35 days. So you can have a pair of the same area, with slightly different look angles, from the same platform, and that pair can then be used in interferometry. A pair is definitely required. SAR interferometry basically allows us to measure the travel path of the radiation, because radar is a ranging technique and because the radiation is coherent; this coherence plays a very important role, and we will discuss it further. This measurement of the travel path, once the pulse or energy has been sent by the sensor, and its variation as a function of satellite position and time of acquisition, allow the generation of the most common product of SAR interferometry: the digital elevation model. This is exactly what was done in the case of SRTM, the Shuttle Radar Topography Mission, where 80% of the globe was covered, except the polar regions. Those pairs were then analysed and processed, and finally digital elevation models at different scales and resolutions were created for almost the entire globe. Apart from allowing us to generate digital elevation models, through SAR interferometry these pairs can also give us deformation information of centimetre accuracy, or even, if careful processing is done and other conditions are fulfilled, millimetre accuracy in our measurements or estimations of ground or surface deformation of the terrain. So there are two main applications: one is of course the digital elevation model, and another is to detect deformation, which might be caused by a landslide, by subsidence, or by an earthquake; there can be many factors. Now we will see how a pair is acquired. In this figure, what we see is the satellite passing through orbit 1 with a particular look angle, looking at the same area; its footprint is shown here. These directions are basically the slant ranges of the different orbits. For the first orbit, this is the slant range; then there is a second orbit with its slant range here. So there are two slightly different slant ranges and look angles, but the area being covered is the same: the same area is covered through two different orbits or orbital passes, with perhaps slightly different look angles. The perpendicular distance between these two slant ranges is called the perpendicular baseline. The baseline is very important for deriving either a highly reliable and accurate digital elevation model or the ground deformation. The baseline should be of optimum length: if it is too large, we will probably not have coherence between the two scenes acquired by the satellite through the two orbits, and then we cannot generate interferograms. So the baseline becomes very important. Now, this is basically
the flight direction which we are seeing. So this is the arrangement in InSAR, and the satellite or sensor should be capable of acquiring data of the same area, perhaps through slightly different look angles, on successive orbits. The distance I was mentioning is the baseline: there is the perpendicular baseline, and there is also the interferometric baseline. The distance between the two satellite positions, as you see them in orbits 1 and 2, measured in the plane perpendicular to the orbit, is called the interferometric (or interferometer) baseline, whereas its projection perpendicular to the slant range, as I mentioned earlier, is called the perpendicular baseline. And this perpendicular baseline, as I have already told you, is very important for deriving interferograms from a pair. SAR interferograms are generated through a processing step called cross-multiplication, and that too pixel by pixel: a pixel of the orbit 1 scene with the corresponding pixel of the orbit 2 scene. That means co-registration of very high quality is also required in SAR interferometry. So the interferogram is formed by cross-multiplying the first SAR image with the complex conjugate of the second. And one image we can take as the master image and the other as the slave image.
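The pixel-by-pixel cross-multiplication just described can be sketched in a few lines of NumPy; the tiny 2x2 complex patches below are made-up values standing in for co-registered single-look complex (SLC) pixels, not real scene data.

```python
import numpy as np

# Made-up 2x2 patches of co-registered complex (SLC) pixels: master and slave.
master = np.array([[1 + 1j, 2 + 0j],
                   [0 + 2j, 1 - 1j]])
slave = np.array([[1 + 0j, 1 + 1j],
                  [0 + 1j, 2 + 0j]])

# Interferogram: first image times the complex conjugate of the second,
# computed pixel by pixel.
interferogram = master * np.conj(slave)

# Its amplitude is the product of the two image amplitudes ...
amplitude = np.abs(interferogram)
# ... and its phase is the interferometric phase, i.e. the phase difference.
phase = np.angle(interferogram)
```

Note how everything here rests on co-registration: if a `master` pixel and a `slave` pixel do not correspond to the same ground cell, their phase difference is meaningless.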
So in this example, the first image is multiplied with the complex conjugate of the second, and that gives the interferogram: the interferogram amplitude is the amplitude of the first image multiplied by that of the second, whereas its phase, the interferometric phase, is the phase difference between the two images. This interferometric phase is what is exploited: in ground deformation studies we look at the phase change between acquisitions, and for digital elevation models the phase is converted to topography using the baseline geometry. This we will see further. Now we take some examples of different sensors on board different satellites; some are still functional, some have completed their lifetimes. If I take the example of, say, ALOS PALSAR, which is a Japanese sensor: it operated in the L band, with a wavelength of about 23 centimetres. The most common band in SAR interferometry, however, is the C band. It started with ERS of the European Space Agency, then RADARSAT of Canada, then ENVISAT, again of the European Space Agency, whose sensor is named ASAR. RISAT does not have the capability of acquiring interferometric data; however, Sentinel-1 does. One more advantage with Sentinel-1 is that the data is available free of cost for any part of the globe: wherever interferometric pairs are available, anyone can download them over the internet once the data has been acquired. And if one would like to go further, the processing software is also available free through the European Space Agency. So you get the data free of cost, you get processing software, and you too can generate your own interferograms, maybe developing a digital elevation model at high spatial resolution, or studying ground deformation induced by many factors. There is also the X band, which is quite close to the C band, at about 3 centimetres; TerraSAR-X and others operate there. Some satellites are in the military domain, for which we generally do not have many details available, nor the data.
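Since one interferometric fringe corresponds to half a wavelength of motion along the line of sight (a rule we will use later when counting fringes), the bands just listed differ in how much deformation each fringe represents. A small sketch with approximate wavelengths (rounded values, not exact sensor specifications):

```python
# Approximate radar wavelengths in centimetres for the bands mentioned above.
bands_cm = {
    "L (ALOS PALSAR)": 23.0,
    "C (ERS / ENVISAT / Sentinel-1)": 5.6,
    "X (TerraSAR-X)": 3.1,
}

# One fringe in a differential interferogram corresponds to lambda / 2
# of displacement along the line of sight (LOS).
for band, wavelength in bands_cm.items():
    print(f"{band}: one fringe = {wavelength / 2:.2f} cm of LOS displacement")
```

So a single X-band fringe represents much less motion than an L-band fringe, which is why shorter wavelengths are more sensitive to small deformations (but also decorrelate more easily).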
Now, how should the images, or datasets, the pairs, be selected for SAR processing, and while making this selection, what are the things we should look at? The first step is the selection of SAR images that are suitable for interferometry, and only then do we take that pair for processing. The criteria adopted for the selection of a pair have a strong impact on the quality of our final results. And these criteria, which we are just going to discuss, depend upon the specific application for which the SAR interferometric images are required. That means, if I am going for digital elevation model generation, I will look at these criteria a little differently; if I am going for a ground deformation study, I will look at them a little differently again. So the two most important SAR applications, as I have already mentioned, are the digital elevation model and differential interferometry, that is, detecting ground deformation. In order to have the best results from our analysis of SAR interferometric data, the
following parameters are important. The first one is the view angle; on this aspect we will further discuss ascending and descending passes, because half of the globe is covered through ascending passes and the other half through descending passes, and we will see whether these passes, with their different view angles, make any difference in our results. Then comes the geometrical baseline: as I have just discussed, the perpendicular baseline is important, and we will also take that further. Then the temporal baseline, the time difference between the two scenes. If the time difference is too large, there might be some changes on the ground, and we have to ask whether those changes were expected or whether they create problems for our analysis of the two scenes. For example, if I am using the SAR interferometry technique to detect the ground deformation induced by an earthquake, then I want the two closest scenes between which the earthquake occurred. Generally, the overpass time, repeat cycle, or temporal resolution of these satellites is 35 days; this is how the orbits have been designed. So if I have two scenes, one pre-earthquake and the other post-earthquake, I will have a lot of control and confidence in my analysis.
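To make the temporal-baseline idea concrete, here is a small sketch (all dates are made up) that picks, from a 35-day repeat cycle, the closest acquisitions before and after a hypothetical earthquake:

```python
from datetime import date, timedelta

# Hypothetical acquisition schedule: one scene every 35 days (repeat cycle).
first_pass = date(2004, 1, 10)  # assumed first overpass of the area
acquisitions = [first_pass + timedelta(days=35 * k) for k in range(12)]

earthquake = date(2004, 6, 20)  # hypothetical event date

# The closest pre- and post-event scenes bracket the earthquake and give the
# shortest possible temporal baseline for a coseismic interferogram.
pre = max(d for d in acquisitions if d < earthquake)
post = min(d for d in acquisitions if d >= earthquake)
temporal_baseline = (post - pre).days  # 35 days for this schedule
```

With this schedule the best achievable temporal baseline for a coseismic pair is simply one repeat cycle; anything longer only adds ground changes unrelated to the earthquake.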
So the temporal baseline should not be too long; otherwise the level of confidence in my results will be much lower. Then the time of acquisition of the data. Generally, the time here does not mean the time of the satellite overpass, because we are not the operators of the satellite; for us the data is simply being acquired, and the overpass time over an area is fixed. Time of acquisition means basically the season, say pre-monsoon versus post-monsoon. The vegetation condition is going to be completely different in the case of Indian terrain, so that should also be kept in mind while selecting the data pairs; timing should be seen in that perspective. Then the coherence. Of course, coherence information only comes once you have started analysing the data, but within the very first few steps you can have an assessment of coherence; we will discuss this part in a little detail, maybe in this discussion or maybe in the next. Then meteorological conditions. Meteorological conditions can sometimes bring changes, because what we are doing through SAR interferometry is basically a ground deformation study: we are looking at changes at the millimetre or centimetre scale, and therefore any big difference between the meteorological conditions of the two scenes of a pair may give slightly different results. The question then is how to establish that the observed changes are purely due to ground deformation rather than due to changes in the meteorological conditions between the two data acquisition dates. So if that information is available, it should also be utilized
while selecting the images. Now, first about the ascending and descending passes. As I was saying, in ascending passes the satellite is going from south to north, and as soon as it passes over or near the pole it goes to the other side and becomes descending. So half of the globe will always be covered in ascending mode, and the other half in descending passes. However, this can bring small differences into our analysis, which we will look at through this figure. When we have ascending overpasses, what we see here is the LOS, which stands for line of sight, the direction in which the satellite is looking. Here I am talking about the fringe map, or interferogram, which we will be generating after the processing. If there is no change in colour, that means I am not observing any fringe, and the conclusion is that between those two dates no change has really occurred. If I am observing some change in the colours, say a single fringe, then there is a change in the line of sight, meaning the ground has either gone down or come up along the line of sight. For example, subsidence will bring this kind of change in the ascending overpasses, whereas in the descending case the same subsidence may bring changes like this; so there is hardly any difference. However, when we see the LOS change far from the satellite and there are multiple fringes, the fringes will have different colours, as you can see in this pattern: going from left to right I see cyan, magenta, and yellow here, and yellow, magenta, and cyan there. In that way I can see the changes due to the ascending and descending passes. With no deformation there is no fringe and no change in colour; with some subsidence, some deformation, I will see a broad fringe, and in both of these cases it is very difficult to see any difference due to ascending or descending mode. However, when the changes are like a thrust or something similar in the east-west direction, where the deformation has occurred due to some earthquake or tectonic event, then I will see the changes in the colours of the fringes like this: cyan, magenta, yellow in one case, and yellow, magenta, cyan in the other. Now, this is the line of sight, as you can see, and this is the direction in which the satellite is going. In the descending mode, of course, the flight direction is opposite and the line of sight is in this direction. So when the satellite is looking this way or the satellite is
looking this way, it may make a difference, because the same structure on the Earth is being viewed from two different look angles through these ascending and descending passes, and therefore they will have different interferograms, which you can see here. One important point I would also like to mention: when ground deformations occur over a large area but are small in magnitude, like the middle example, where a large area is subsiding but the magnitude of subsidence is small, you will see very open fringes, as in this middle case. But when deformations occur in a small area and are of high magnitude, as in the bottom example, the fringes will be very close together. Also, by counting the fringes and multiplying by half the wavelength, we can estimate the deformation. So if the wavelength is, say, 5.6 centimetres, then half the wavelength is 2.8 centimetres, and we multiply this by the number of fringes. In this case, if I say this is C-band data and I have got two clear-cut fringes in the bottom example, I multiply 2 by 2.8, which gives a deformation equivalent to 5.6 centimetres. In that way, the magnitude of deformation can be estimated based on the number of fringes, and over how much area can also be seen. There is only one fringe in the middle case, so I say the deformation there is a subsidence of 2.8 centimetres, whereas in the bottom example I am seeing larger deformation. One has to remember that this deformation is in the line of sight, because the ground has either gone down or come up along the line of sight of the data acquisition. So in the bottom example, the deformations are of magnitude 5.6 centimetres. This plays a very, very important role while assessing the deformation. Now, another important
aspect is coherence. In the example we have seen, we get clear fringes only in areas with a particular kind of characteristic in the interferogram. If you look at a larger scale, there is one fringe which is very clearly visible, but the rest of the area does not have any fringes. These are the incoherent areas, which are marked here, and the coherent area is this one, with just the one single fringe. So areas without a clear fringe pattern represent incoherent areas. Now, incoherence can occur for certain reasons. Basically, since we are working with a pair, the same pixel should reflect the radio waves in almost the same condition between the two observations, that is, between the two pixels of the two images. When the displacement within one pixel is uniform, good coherence can be obtained; when it is not, you do not get good coherence. For example, on a water surface interference cannot occur, because the water surface varies with the fluctuation of the water over time. So if there is a 35-day time difference between the two images, which is very common, then the surface conditions of the water will be different.
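This agreement between the two observations can be quantified as the sample coherence, which equals 1 when the two patches differ only by a constant phase and drops toward 0 for surfaces that rearrange between acquisitions, such as water. A minimal sketch (the patch values are invented):

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence of two co-registered complex patches:
    |sum(s1 * conj(s2))| / sqrt(sum(|s1|^2) * sum(|s2|^2)), in [0, 1]."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

# A patch compared with a phase-shifted copy of itself is fully coherent ...
patch = np.array([1 + 1j, 2 - 1j, 0 + 2j, 1 + 0j])
stable = coherence(patch, patch * np.exp(1j * 0.7))  # ~1.0

# ... while two unrelated random patches (a water-like surface that has
# changed completely between acquisitions) give coherence near zero.
rng = np.random.default_rng(42)
a = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
b = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
water = coherence(a, b)  # close to 0
```

In practice this estimator is computed in a small moving window over the co-registered pair, producing the coherence map used to judge where the interferogram can be trusted.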
And therefore we may not get coherence in our analysis of these two scenes. Also, even if the displacements within a pixel are small enough to be extracted from the two images, when the ground surface is inclined, as in mountain areas, interference cannot easily occur if the distance between the two observation positions of the satellite is large. So these are some of the reasons which may create problems in obtaining coherence. And once you do not have
coherence in the scenes, then interferograms cannot be derived. In this example, in the left figure, the incoherent areas are there on one side and come out very clearly, whereas coherence has been observed over a large area, though we can count hardly one fringe. A single fringe generally cannot lead us to conclude that ground deformation has taken place, because errors themselves may contribute up to one fringe. So when I start interpreting this interferogram, if I am getting one fringe spread over a very large area of the scene, I can ignore it and say that it might be due to some errors, maybe because of meteorological conditions, or in the processing, or other things. So this brings us to the end of the first part of SAR interferometry. In the second part, we are going to discuss applications and other intricacies. So this brings us to the end of this discussion. Thank you very much.