Hello everyone; we will continue our discussion on active microwave remote sensing. In the previous discussion we covered some basics of microwave remote sensing in part 1. Now we are going to discuss microwave remote sensing in a little more detail, especially the errors associated with microwave datasets or raw data. One fundamental condition in microwave remote sensing is that it is not nadir-viewing remote sensing; it is oblique, or slant-range, remote sensing. Because the sensor looks sideways, there are problems especially associated with hilly terrain, and we put these in the category of slant-range distortions. If we look here, this is the sensor, a pulse is sent, and the terrain is also shown. If there is a hill, the first problem is shadow: one side of the hill is illuminated, whereas the other side is not and falls into shadow. When we look at these microwave power images, we will not see that part of the hill; it goes into shadow, shown here as a dark thick line. There are two more problems associated with hilly terrain in microwave data: one is foreshortening, and the other is layover. These also bring distortions, especially in hilly terrain, and we will discuss them in detail. Recall also how height is measured: the geoid is an estimated geophysical surface, the real terrain is shown as the top layer here, and the ellipsoid is a mathematical surface. When height is calculated using radar data, especially in the SAR interferometry technique, where we are
preparing digital elevation models, these slant-range distortions play a very important role. One thing to remember: the objects closest to the sensor are in the near range, whereas objects far from it are called far-range objects. Generally we take the mid slant range at the centre of the scene; anything far from the centre is in the far range, otherwise it is in the near range. Slant-range distortions occur because the radar measures the distance to a feature or ground object in slant range, rather than the true horizontal distance along the ground. It measures neither the vertical distance nor the horizontal distance; what it measures is the slant range, and that is why this problem arises. The result is that slant-range distortion varies with the image scale, especially as we move from near to far range, that is, from the centre towards the margins of the dataset. Now, these slant-range
distortions are of two kinds: foreshortening and layover. First we take foreshortening. It occurs when the radar beam reaches the base of a mountain or tall feature before the top, so the feature appears tilted towards the radar or sensor. The best example is a mountain: because of this displacement, the feature appears shorter in the image than it really is, and that is what we call foreshortening.
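Since the radar measures distance in slant range rather than along the ground, a simple right-triangle conversion illustrates the relationship for flat terrain. This is a minimal sketch with made-up numbers, not the geometry of any particular sensor:

```python
import math

def ground_range(slant_range_m: float, altitude_m: float) -> float:
    """Project a slant-range distance onto flat ground, assuming a
    simple right-triangle geometry (illustrative only)."""
    return math.sqrt(slant_range_m**2 - altitude_m**2)

# Hypothetical case: platform at 800 km altitude, 900 km slant range.
print(ground_range(900e3, 800e3) / 1e3)  # ground range in km, ~412.3
```

Over rugged terrain this flat-earth conversion breaks down, which is exactly why foreshortening and layover appear.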
For example, this part of the mountain will get only a very small registration in the image, only up to this much. That means if I take the centre of this image, only this part is recorded, not the entire length of the mountain. This much is considered foreshortening, and this kind of displacement appears in the image. That
means that foreshortening occurs when the radar beam or pulse reaches the base of a tall feature, tilted towards the radar, before it reaches the top. The radar beam reaches the base first, because the top is at a greater slant distance and is reached later. So the base is registered first, and the feature appears shorter. The same thing is explained here: if I take these two points and plot them as a′ and b′,
instead of the length ab, we get a′b′, and a′b′ is definitely shorter than ab. On the other side, for bc, the registered b′c′ is roughly equal to bc. So one slope of the hill, the slope facing the sensor, is recorded short; that is why it is called foreshortening. This is a very common error in radar data of hilly terrain. If we look at real images, this is what we see, and therefore interpretation and analysis of such images becomes very difficult. When we acquire radar data, especially the power image, of highly rugged terrain like the Himalaya, this is what we see, and interpretation becomes very difficult: one slope is fully illuminated, while the other slope gets no illumination and goes completely into shadow.
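The a′b′-shorter-than-ab effect can be checked numerically. Below is a minimal sketch under a parallel-ray (far-sensor) approximation, with a made-up symmetric hill; the incidence angle and dimensions are assumptions, not values from the lecture:

```python
import math

def slant_coord(x_m: float, h_m: float, incidence_deg: float) -> float:
    """Slant-range coordinate of ground point (x, h) for a distant sensor
    looking from the -x side at the given incidence angle (measured from
    vertical), using a parallel-ray approximation."""
    t = math.radians(incidence_deg)
    return x_m * math.sin(t) - h_m * math.cos(t)

# Symmetric hill: half-width 1000 m, height 500 m, incidence 35 degrees.
fore = slant_coord(1000, 500, 35) - slant_coord(0, 0, 35)     # slope facing radar
back = slant_coord(2000, 0, 35) - slant_coord(1000, 500, 35)  # slope facing away
print(fore < back)  # True: the slope facing the sensor is foreshortened
```

The facing slope maps to a much shorter slant-range extent than the far slope, which is precisely the foreshortening seen in power images.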
Second, the base of the mountain or hill is recorded first, whereas the top is recorded later. In this situation the slope gets very little extent in the data compared to the real one, and the images appear like this; this phenomenon is called foreshortening. Now, there is another associated phenomenon in hilly regions, and that is the layover
phenomenon. It occurs when the radar beam reaches the top of a tall feature or hill before it reaches the base. All these distortions are caused by the slant range, because the radar is looking sideways, in an oblique direction. A
situation may arise where the top of the mountain is recorded before its base. It is just the reverse of foreshortening, and as you can see here, b is recorded first and a is recorded later: that is layover, and it creates problems in the dataset. Though the distance between b and c remains the same, the distance between a and b
gets reversed: the return signal, that is, the backscatter from the top of the feature, is received before the signal from the bottom, just the reverse of foreshortening. The result is that the top of the feature is displaced towards the radar from its true position on the ground, laying over the base of the feature, from b′ to a′. This again creates distortions which we put under the category of slant-range distortions, and the images might look like this: here is the near range, here the far range, and the layover effect is
dominating in the near-range and far-range parts of this image of the mountain areas. That is the basic challenge with radar, that is, active microwave, remote sensing in hilly terrain: because of undulations or ruggedness, as in the Himalayan mountain system, the slant-range distortions, namely layover and foreshortening, really dominate, and processing, analysis, and interpretation of such images becomes very difficult. Now let us see one more slant-range distortion: shadow, which I mentioned earlier. The shadowing effect increases with greater incidence angle, shown here as φ, just as shadows lengthen as the sun sets in normal optical remote sensing. If the illumination comes from this direction, this is basically the wavefront, and this side of the hill is illuminated
without any problem, whereas the other side of the slope goes completely into shadow, and that brings distortion into the images. As you can see here, in these folded sandstone beds of Malaysia, this part is completely in shadow and we get no information whatsoever about this region, which may create a problem in our analysis. So shadow will also occur in hilly regions. If I compare with flat terrain, like the Indo-Gangetic plain or some desert areas, layover, foreshortening, or shadow issues might not be there at all, and if they are, they will be
very minor; therefore radar, that is, active microwave remote sensing, can be very useful in such situations, but in hilly terrain it is challenging to use.
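The layover condition described above, the top being imaged before the base, can be sketched in the same simplified parallel-ray geometry; the slope size and incidence angles below are hypothetical:

```python
import math

def is_layover(height_m: float, half_width_m: float, incidence_deg: float) -> bool:
    """True when the peak of a slope would be imaged before its base,
    i.e. when the peak's slant-range coordinate falls closer to the sensor
    than the base's (parallel-ray approximation, sensor on the -x side)."""
    t = math.radians(incidence_deg)
    peak_offset = half_width_m * math.sin(t) - height_m * math.cos(t)
    return peak_offset < 0

# A steep slope seen at a small incidence angle: layover occurs.
print(is_layover(500, 1000, 20))   # True
# The same slope at a larger incidence angle: no layover.
print(is_layover(500, 1000, 45))   # False
```

In this toy model, layover appears whenever the terrain slope is steeper than the look direction, which is why steep incidence geometries are the most affected.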
Now, we use the term SAR, synthetic aperture radar: basically a large radar antenna is synthesized, because in space we cannot carry a very large physical antenna, and therefore this concept is used. A digital SAR image can be seen as a mosaic: a two-dimensional array formed by columns and rows, as in normal raster data, of small picture elements, that is, pixels. Each pixel is associated with a small area of the earth's surface, called a resolution cell, which defines the spatial resolution. These cells are generally square, each representing a part of the earth of perhaps 10 m by 10 m or 30 m by 30 m. This is how SAR images are created, and each pixel holds a complex number. This is very important in radar remote sensing, because in normal passive remote sensing, in the visible, infrared, or thermal infrared, the pixel value just represents the reflection or emission from a patch of ground. In a SAR image, by contrast, each cell holds a complex number carrying the amplitude and the phase of the wave recorded for that cell. This complex number is made of amplitude and phase information about the microwave field backscattered by the different scatterers, the different objects, inside the cell: rocks, vegetation, built-up areas such as buildings. Every natural or man-made object on the ground behaves differently in terms of its backscatter in the microwave field. So what we record in the complex number is amplitude and phase information, and one application, SAR interferometry, exploits the phase difference.
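The idea that each SAR pixel is a complex number carrying amplitude and phase can be illustrated in a few lines; the pixel value here is entirely made up:

```python
import cmath

pixel = complex(3.0, 4.0)   # hypothetical single-look complex (I + jQ) pixel value

amplitude = abs(pixel)      # backscatter strength -> what a power image shows
phase = cmath.phase(pixel)  # wrapped phase in (-pi, pi] -> what interferometry exploits

print(amplitude)            # 5.0
print(round(phase, 3))      # 0.927 (radians)
```

The amplitude feeds the power image, while the phase of two such acquisitions is what interferometric processing compares.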
This phase information is exploited: the phase difference between two images of the same area taken on two different dates is analyzed, and we get the change which has occurred between those two dates. That information is acquired through phase differences. Now, in any raster image there are columns and rows, and the different rows of a SAR image are associated
with different azimuth locations, because we are recording the data in slant range: different rows represent different azimuth locations, whereas different columns indicate different slant-range locations.
So not only does each cell hold amplitude and phase information, but the rows and columns carry geometry as well: the rows represent azimuth locations, whereas the columns represent slant-range locations. The radiation transmitted from the radar reaches the earth, is scattered
by objects on the ground, and then comes back to the radar, so a SAR image records a two-way travel. Now, scatterers at different distances from the radar, that is, at different slant ranges, introduce different delays between transmission and reception of the radiation. Each object or scatterer behaves differently, whether it is a water body, vegetation, or anything else: all send back different signals with different delays. The data is of course in waveform; the peak-to-peak distance of the wave is the wavelength λ, and one whole cycle corresponds to 2π of phase. So when the wave completes one full cycle, that is one wavelength, and the phase, denoted here, advances by 2π. In SAR interferometry, what is used is half the wavelength.
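The phase-to-distance relation sketched in the figure, φ = (2π/λ) × 2R, and the fact that only the last wavelength-fraction of the path is observable, can be written out as follows (the wavelength is a rough C-band value and the range is made up):

```python
import math

wavelength = 0.056   # metres, ~C-band (ERS/RADARSAT class) -- assumed value
R = 850.0123         # metres, one-way sensor-target distance -- made up

# Two-way travel expressed in phase: phi = (2*pi / wavelength) * 2R
phi = (4 * math.pi / wavelength) * R

# The sensor only observes phi modulo 2*pi: the last fraction of the
# two-way path that is smaller than one wavelength.
observed = phi % (2 * math.pi)

# Increasing R by half a wavelength adds exactly one full 2*pi cycle,
# so the observed (wrapped) phase is unchanged.
phi2 = (4 * math.pi / wavelength) * (R + wavelength / 2)
print(abs(phi2 % (2 * math.pi) - observed) < 1e-6)  # True
```

This is why the phase alone cannot give the absolute range, only its last fraction; interferometry works with differences of such wrapped phases.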
Half the wavelength is basically multiplied by the number of fringes we get, which we will discuss separately; I just want to link this figure with SAR interferometry, by which the phase differences are recorded. Now, the phase here is φ = (2π/λ) × 2R. In
that way, φ becomes equal to (4π/λ) × R. This is of course a sinusoidal signal, and 2R is the two-way distance, shown here with the 2π cycle; and with half the wavelength we can also get at the phase. Due to the almost purely sinusoidal nature of the transmitted signal, this delay is
equivalent to a phase change φ between the transmitted and received signals, and the phase change is thus proportional to the two-way travel distance 2R, divided by the transmitted wavelength; that is why 2R is used in the equation, because the range has to be travelled twice. In other words, the phase of the SAR signal is a measure of just the last fraction of the two-way travel distance that is smaller than the transmitted wavelength. It is also important here that
the SAR signal measures just the last fraction of the two-way travel distance that is smaller than the transmitted wavelength. Now let us compare real aperture and synthetic aperture. A real aperture radar uses an antenna of the maximum practical length to produce a narrow angular beamwidth in the azimuth direction. One thing you must always remember while using, analyzing, or interpreting radar data is that it is slant-range data,
collected in an oblique direction; all the intricacies I am discussing arise because of the slant range, the oblique viewing direction, and would not be there otherwise. The radar has to collect the backscatter once the pulse of microwave energy has interacted with the objects on the ground, and to collect that signal you need either a big antenna or something which can produce a narrow angular beamwidth in the azimuth direction. A synthetic aperture radar is a coherent side-looking airborne system which uses the flight path of the aircraft to simulate an extremely
large antenna. Using this side-looking airborne system in a coherent manner, it simulates a large antenna; it does not physically carry one, as a real aperture radar would, because having a large antenna on a spacecraft is very difficult. That is why the synthetic aperture concept was developed and is now used regularly. In this side-looking airborne or spaceborne system, a large antenna is simulated coherently, synthesizing the aperture electronically,
and that generates high-resolution remote sensing images. In a real sense the spacecraft is not carrying a large antenna, but because of the coherence, or synchronization, of this side-looking system, a large antenna can be simulated, and therefore it is possible to generate high-spatial-resolution
remote sensing images using the active microwave technique. So this is the difference: in real aperture radar, an actually large antenna is carried by the spacecraft, whereas in synthetic aperture radar a normal-sized antenna is carried, and a large antenna is synthesized by exploiting the side-looking airborne or spaceborne geometry. The signal processing uses the magnitude and phase of the received signals over successive pulses from elements of a synthetic aperture
radar. What happens is that the magnitude and the phase of the wave received over successive pulses are used, because the antenna on the spacecraft is continuously sending successive pulses of energy, and whatever backscatter comes in is recorded. So this requires elaborate signal processing of the backscattered, received
signals. The magnitude and phase are recorded in a manner that we can exploit in SAR, the synthetic aperture. After a given number of cycles, as in the earlier figure, the stored data is recombined, taking into account the Doppler effects inherent in the different transmitter-to-target geometry in each succeeding cycle, to create a high-resolution image
of the terrain being overflown. So, exploiting the Doppler effect as well, high-resolution images can be generated with this synthetic aperture radar technology, and currently most microwave remote sensing satellites use SAR technology.
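The payoff of synthesizing a long antenna can be seen from the standard textbook azimuth-resolution formulas; the numbers below (range, wavelength, antenna length) are illustrative assumptions, not any specific mission:

```python
def rar_azimuth_res(range_m: float, wavelength_m: float, antenna_m: float) -> float:
    """Real-aperture azimuth resolution: set by the physical beamwidth
    (~wavelength / antenna length), so it degrades linearly with range."""
    return range_m * wavelength_m / antenna_m

def sar_azimuth_res(antenna_m: float) -> float:
    """Classical SAR result after aperture synthesis: roughly half the
    physical antenna length, independent of range."""
    return antenna_m / 2.0

# Hypothetical C-band spaceborne case: 850 km slant range, 5.6 cm, 10 m antenna.
print(rar_azimuth_res(850e3, 0.056, 10.0))  # 4760.0 m without synthesis
print(sar_azimuth_res(10.0))                # 5.0 m with synthesis
```

The contrast, kilometres versus metres at spaceborne ranges, is why essentially all imaging radar satellites use aperture synthesis.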
Now, one of the best examples of this technology was a very special mission called the Shuttle Radar Topography Mission. The aim of this mission was to create a digital elevation model for almost the entire globe except the polar regions. It was flown in February 2000, for only 11 days, over different orbits of the earth. A 197-foot-long mast was deployed from the shuttle, with a 28-foot receive antenna at its end, the biggest such structure ever deployed in space, so that a fixed baseline was maintained between the two antennas. The purpose was that, at the same time, a part of
the terrain, the ground, should be viewed from two different angles. That is why one look was from this angle and another from a different angle, using the 197-foot-long mast, which gave a fixed baseline. One view was from here, of course also in slant range, and the other was from
197 feet away. This allowed the data to be collected coherently, because with a fixed baseline it becomes very easy to create coherent image pairs, and with coherent image pairs it becomes very easy to create digital elevation models. Because at
the same time, data was being collected along two different slant ranges with a fixed baseline between them, and that allowed digital elevation models to be created, in this dual-antenna setup, for almost the entire globe except the polar regions. The data was then analyzed, and digital elevation models at different resolutions were created. Initially the digital elevation models created by
this Shuttle Radar Topography Mission, in short SRTM, were released at 1-kilometre resolution, later at 90-metre resolution, and further processing then allowed digital elevation models from SRTM at 30-metre resolution for the entire globe. That gave a completely new insight into microwave
remote sensing, and a lot of applications were developed using such datasets, that is, the digital elevation models. One product we all use is Google Earth: when we get elevation values from Google Earth, those values come from the SRTM digital elevation models sitting in the background behind the satellite images. So SRTM, for the first time, could produce a global digital elevation model, down to 30 m, by having this arrangement of two slant ranges collecting data simultaneously with a fixed baseline. Otherwise, if the baseline changes, the entire processing strategy of SAR interferometry also changes and has to be worked out for each individual scene, whereas in SRTM everything was fixed, so it became comparatively easy to develop digital elevation models. This is exactly what happens in synthetic aperture radar: the sensor transmits towards the earth, and as the spacecraft or aircraft moves it collects whatever backscatter comes in, thereby simulating a large antenna in space; that is why it is called synthetic aperture radar. Using the side-looking system, a large antenna can be synthesized and data can be acquired. Here again, synthetic aperture data collection is shown: transmission happens, this is the start of the synthetic aperture data collection, this is the end, and it goes on continuously. For the earlier pulse, data has been sent and collected, whereas for the next pulse, again data
has been sent and collected. So, instead of carrying a large physical antenna on the spacecraft, the movement of the satellite and the side-looking geometry allow a large antenna to be simulated; that is what happens. SAR systems are coherent. Whenever we want to use not just the power images, which also have applications that we will see, but use the data in interferometry, for example for creating digital elevation models or in change detection studies, especially for measuring or estimating ground deformation, then coherence is very much required between two scenes.
Having the same side-looking angle and slant range, but maybe on two different dates, coherence is required in order to detect the changes, so that no changes remain except those caused by actual movement of the ground. Loss of coherence can occur due to climatic conditions, or changes in the vegetation or the built-up areas, and so on. So SAR systems are coherent, that is, capable of recording both amplitude and phase values, and in SAR interferometry we basically exploit the phase differences. A focused SAR image is therefore a complex-valued matrix: its amplitude is a map of the microwave ground
reflectivity of the sensed area, which we also call the power image; such images represent the amplitude, the backscattered reflectivity from the ground. The SAR phase, on the other hand, depends both on the local reflectivity of the objects and on the sensor-target distance; if the sensor-target distance is too large, we may not get much return, as there is some energy loss in between. The sensitivity of the phase data to the sensor-target distance is extremely high: a two-way path difference of one wavelength λ corresponds to a one-way path difference of λ/2 and translates into a full phase cycle of 2π. So half the wavelength appears as one fringe in SAR interferograms, and different sensors give different fringe sizes. For example, ERS and RADARSAT operate in C-band at a wavelength of about 5.6 cm (5.66 cm for ERS), so half the wavelength is about 2.83 cm, and one fringe in the interferometric image corresponds to roughly 2.83 cm of path difference. If I get 10 fringes, which I can count myself or have the system count, then the number of fringes multiplied by half the wavelength in which the data was acquired gives the deformation: I can say very well that that much deformation has occurred. The ERS and RADARSAT example is given here, and this gives the total deformation that has taken place. That deformation might be due to an earthquake, due to subsidence from over-exploitation of groundwater, due to mining, natural landslides, or any other reason.
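The fringe-counting arithmetic just described is simple enough to write down; the fringe count here is hypothetical, and the wavelength is the ~5.66 cm C-band value mentioned for ERS:

```python
wavelength_cm = 5.66   # ERS-class C-band wavelength, in centimetres
fringes = 10           # number of fringes counted in the interferogram (made up)

# One fringe (a full 2*pi cycle) corresponds to half a wavelength of
# line-of-sight change, because the path is travelled twice.
deformation_cm = fringes * wavelength_cm / 2.0
print(deformation_cm)  # 28.3 cm of line-of-sight deformation
```

Note that this gives displacement along the radar line of sight; converting it to vertical or horizontal motion needs the viewing geometry.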
So deformation can be measured, can be estimated, very accurately using active microwave remote sensing. This brings us to the end of this discussion. In the next discussion we will focus more on SAR interferometry, how it works and what its intricacies are, and then we will also see the applications. Before that, we will also look at SAR power images and see how they can be used for different applications other than interferometry. This brings us to the end of this discussion. Thank you very much.