
Virtual Reality
Prof. Steve Lavalle
Department of Multidisciplinary
Indian Institute of Technology, Madras

Lecture - 09
Human Vision (photoreceptors)
Hey, hello, welcome back. Let us continue onward. In the last lectures we covered light, some of the basic physical properties of the propagation of light, and then got into optical systems. I explained various kinds of lenses, what happens to objects at various distances, and explained real images and virtual images.
And then we explained the eye's ability to form images on the retina, using its lens to change the diopter of the eye. I gave you several cases of that. I want to now explain what it looks like when, in a head mounted display, you have a screen placed in front of the eye with a convex lens in between. This is a very common situation and this is what you have in the lab. (Refer Slide Time: 01:07)

So, if we take the eye again, I am drawing the same kind of pictures as I did last time. The retina is in the back here, and I have the lens of the eye here. Suppose we have light coming in through parallel rays, and it focuses on the retina, in this particular example at the place called the fovea, which is the place of highest visual acuity and something that we will cover today. As we go along today I will be explaining human vision: the biology of it, some of the neuroscience, some of the particular components that we have, trying to get you to understand how visual perception happens in our brains. I want you to get an understanding of that because it is a critical part of the engineering of VR systems overall, alright.
So, we have this, and then I have a display, let us say here. This is a visual display. If you put a display very close to your eyes, can you focus on it? If it is very, very close you will not be able to focus, because, if you remember from last time, if you consider each one of these pixels as a point source of light, the rays are going to be very much diverged, right.
And remember, the diopter will tell you, if you have parallel rays, how far it will take before they converge. If the rays are diverging, it will take a very, very powerful lens to converge them; the lens in your eye can compensate for some of that, but not all of it. So, I just brought a weak convex lens with me today, and if I want to go up and try to focus on some particular part of the board, holding the lens very close to my eye and seeing how close I can get, I have to stop about right here.
I tried with some students a little bit before the class started and they could get it quite a bit closer, because in addition to using this lens to converge the rays they are using their eye muscles to converge them further; I have lost about 30 percent of my ability to do that. So, I have to hold it further back, and maybe in a year I will have to hold it even further back. What is going to work for me is a more powerful lens, right; it is going to do all of the work that my eye's lens used to do when I was younger, and more, so that it will work for everyone.
At least everyone who is able to take light coming in as parallel rays and focus it onto the retina. You could adjust some lens that you put in between, moving it back and forth to cover different cases of nearsightedness and farsightedness. But what you cannot easily compensate for is astigmatism, which is one of the lens aberrations that we talked about last time. The human eye is subject to astigmatism: the eye becomes ellipsoidal in shape in some way, and then the focusing becomes asymmetric.
So, if you remember, there is a horizontal focal plane and a vertical focal plane, for example, and they are not the same when there is astigmatism. But by adjusting the lens location you can at least cover some range of diopters, which makes it workable for a very large range of people. So, I put a lens in the middle here. I did not bring a powerful enough lens to really illustrate being able to go very close; if you go out and buy a very powerful magnifying lens, it should be exactly right for this, and you can do the experiment yourself.
So, the pixel that I may be looking at here has very diverging rays; if I draw this right here, the rays are diverging, but then they bend through the lens and come out parallel. I am not quite drawing that right, but the lens should be taking these diverging rays and making them come out parallel. If they come out converging then you have a problem: no matter what you do with your eye's lens you will see blur, because they are converging short of the retina. So, you have got to be careful with that. If you go the other way, and they are still diverging a little bit, maybe your eye can compensate.
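The geometry here can be sketched with the idealized thin-lens equation (this is a simplification; the 40 mm focal length and the screen distances are made-up example values, not from the lecture):

```python
import math

def image_distance(f, d_o):
    """Thin-lens image distance from 1/f = 1/d_o + 1/d_i.

    Returns math.inf when the screen sits exactly at the focal
    distance (d_o = f), meaning the rays emerge parallel; a negative
    value means a virtual image (rays still diverging, which the eye
    may compensate for); a positive value means the rays converge
    before reaching the eye, which causes unavoidable blur.
    """
    inv = 1.0 / f - 1.0 / d_o
    return math.inf if inv == 0 else 1.0 / inv

print(image_distance(0.04, 0.04))   # screen at focal plane: parallel rays
print(image_distance(0.04, 0.035))  # screen inside f: negative, still diverging
print(image_distance(0.04, 0.05))   # screen beyond f: positive, converging rays
```

This matches the three cases in the discussion: parallel rays are the comfortable target, slightly diverging rays can be accommodated, and converging rays cannot.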
And that depends on your ability to change your lens, all right. Any questions about that? So, the retina is this part all around back here. I am going to go into the details of the retina and the neurons that are very close to it, and then I will eventually cover the visual pathways, as the signals go all the way back into the visual cortex, which is back here under your skull. So, placed along the retina are what are called photoreceptors.
(Refer Slide Time: 06:43)

The photoreceptors, let me write it out: photoreceptors. I like to think of these as the input pixels, if we think in engineering terminology.
So, the display has its pixels on it, right, RGB pixels; there are essentially input pixels on the retina, whereas the display is producing the output pixels. And there is some kind of interface going on here that involves a significant amount of optics, right? The eye's lens, the cornea, which remember is doing the greatest amount of light bending, and the engineered lens as well. So, all of this comes together.
There are 2 kinds of photoreceptors; you may have seen this before: rods and cones. Regarding rods, we have about 120 million, and for cones we have far fewer, only about 6 million. The functions of these different types are considerably different: rods are for low light intensity, and cones are for colour sensitivity. This separation of different types of photoreceptors has a profound impact on the way that we perceive brightness levels, colour, all sorts of things as we process visual information and get to the perception of vision. So, you may have noticed that if you are outside at night, in a low light setting, you cannot distinguish colours very well, right?
So, it is one of the fundamental outcomes of this. Let me show you a picture of how these rods and cones are distributed around the retina. Notice that when the light comes in from, say, the bottom here, it hits this part of the retina right up at the top; when the light comes in from the top, it hits the bottom part of the retina. So, in some sense the image is upside down, right? Why do I not look upside down to you right now? After all, the image on your retina is upside down, yeah.
So, your brain has learned to accept that, right? During your entire lifetime it has been considered normal; there is no transformation that has to be applied to it, there are not some neurons that go and flip the image, I do not think so. It is just what you have learned. You may have heard of experiments where people put on prism glasses that invert the images, and then after some number of days or weeks they do not see the inversion anymore; everything looks fine again.
So, your brain can learn the orientation as being correct, and it does not matter whether the image is upside down or right side up; there is no special piece of hardware devoted to inverting and correcting it. It is in some sense consistent with what you have had your entire life, alright. Let us see, let me show you the picture. (Refer Slide Time: 10:26)

So, this shows the number of receptors per square millimetre. 0 is right at the fovea, and that is the place where you have the greatest concentration of cones; as you get a degree or 2 off from that, the cones start to get replaced by rods, and then the rod density increases until you get about 15 degrees away or so to either side, except for this strange anomaly over here between 10 and 20 degrees, which is the blind spot on the retina. The reason why the blind spot is there is the connection to the optic nerve, whose geometry I will show in just a little bit.
(Refer Slide Time: 11:16)

So, for these different types of photoreceptors that we have, the rods are responsive to light across the wavelengths shown in the dashed line here, centered at let us say 498 nanometers; of course, they will respond to a band around that, but with lower and lower probability for an equivalent intensity of stimulus. And for the cones there are 3 different kinds. This amazes me: it is RGB, just like the way we design our monitors. So, we have red cones, green cones, and blue cones distributed in some kind of irregular way along the retina.
So, let me just draw a little bit of a picture here as well. In the fovea, at 0 degrees, it is all cones, and they are very densely packed.
(Refer Slide Time: 12:11)

I am not drawing them as different colours, but there is also some kind of irregular arrangement of colours. These are quite small: the diameter is between 1 and 4 micrometers. What I think is interesting about that is what happens if we think about the wavelengths of visible light. So, let me squeeze that up here: wavelengths of visible light.
(Refer Slide Time: 12:56)

What did I say last time? It is between 400 and 700 nanometers, but let us convert it to micrometers: it is 0.4 micrometers to 0.7 micrometers, using 10 to the minus 6 units instead of 10 to the minus 9 units. If we do that, then we see that at the very center of the fovea these cones pack down to a size of one micrometer, which is not very much larger than the wavelength of visible light, which I find really incredible.
So, if you tried to make these any smaller, you would start to get very difficult kinds of interference, because they would approach or fall below the actual wavelengths and would not operate so well. This seems to be about as small as you can make them and still have them function well, which I think is quite amazing: the size of these is down to roughly the wavelengths of visible light.
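As a quick sanity check on the sizes quoted above (the numbers are the ones from the lecture: a 400 to 700 nanometer visible band and roughly 1 micrometer foveal cones):

```python
# Visible wavelengths converted from nanometers to micrometers,
# compared against the roughly 1 micrometer diameter of the
# smallest foveal cones.
visible_nm = (400, 700)
visible_um = tuple(nm / 1000 for nm in visible_nm)  # 1 um = 1000 nm
min_cone_diameter_um = 1.0

print(visible_um)                            # (0.4, 0.7)
print(min_cone_diameter_um / visible_um[1])  # about 1.43: barely larger
```

So the smallest cones are less than one and a half times the longest visible wavelength, which is why shrinking them further would run into diffraction-scale effects.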
So, quite small. As I said, at 0 degrees it is all cones. Already when you get 2 degrees off, you are leaving the fovea. What happens there is that the cones are getting bigger and rods start appearing among them. The cones are in the 4 micrometer to 10 micrometer range, whereas the rods are down to 1 micrometer. So, the rods are small, like the foveal cones were, and the cones are now getting larger and loosely interspersed with a lot of tightly packed rods, and then we get all the way over to 50 degrees.
(Refer Slide Time: 15:25)

It is almost entirely rods, with a few stray cones in there. That suggests that when we look forward, when the fovea is fixated, we have very high visual acuity in colour. As we look off to the side without rotating the eye, so that the image falls on the side of the retina, towards the top or bottom in my horizontal pictures, we start losing spatial resolution in terms of colour, and eventually the whole thing tapers off, as I showed in this picture, when we get 60 or 70 degrees away.
You can see that the density is going down significantly. So, eventually you end up losing everything, but certainly your ability to distinguish colours out here is very weak; if you believe you can see colours there, it is because your brain is filling in information that is not there, trying to speculate, let us say. Any questions about this?

Virtual Reality
Prof. Steve Lavalle
Department of Multidisciplinary
Indian Institute of Technology, Madras

Lecture – 9-1
Human Vision (sufficient resolution for VR)
So, this is a very interesting question that arises, and it is fundamental to the design of virtual reality headsets. If I put a display in front of the eye like this, how much resolution is enough? How much resolution should this display have, given that I know the photoreceptor density from this plot and from these pictures I have made? We should be able to do some simple calculations and try to estimate.
(Refer Slide Time: 00:22)

Just to give an idea, you see this around: people in industry are talking about how much is enough, and I think it is quite difficult to say without doing the experiments. Somebody has to manufacture high resolution displays higher than 1080p, something like 2k by 2k per eye, then maybe 4k by 4k per eye, and maybe 16k by 16k per eye, and so forth, and see where the limits are. That is the kind of thing that should be done.
(Refer Slide Time: 01:09)

So, how much display resolution, let us say, is enough for VR?
(Refer Slide Time: 01:24)

Like I will leave my picture right here that I made.
(Refer Slide Time: 01:34)

I guess I am all the way off at 50 degrees in the picture I have remaining, but suppose this were at the fovea, let us say, and we will return to the other picture. Suppose I had very low resolution in this optical system here. Think about one individual pixel: it is like a kind of square, let us say; they do not really look like that, but let us suppose they are perfect squares and they get imaged on the retina somewhere.
If the resolution is low, there will be this pixel that gets projected onto the retina, and then there are a lot of photoreceptors to detect it; if we were at the fovea, let us say, with its high density of photoreceptors, we may have a lot of photoreceptors per pixel. So, you perceive that there is a square there: you are seeing what is called the pixel structure. Now, it is not exactly a square. You can do this after class if you like: use the same magnifying lens, walk up to this screen over here, and take a look at the subpixels, if you have never done that before. The r, g, and b components are interlaced in some kind of way, so a pixel is not exactly one rgb rectangle. You have to go in and look at the even higher resolution, the even smaller components contributing to the images that we see.
Well, here is one thing I could do: I could make a rough estimate and say we have 126 million photoreceptors in total, because I said we had 120 million of one kind and about 6 million of the other. Which one, cones or rods? It is 120 million rods and about 6 million cones, because, as you can see from the other picture, the cones are all concentrated around the fovea, whereas there are quite a lot of rods and they are distributed over a much larger area. So, it makes sense that rods are the significant majority.
(Refer Slide Time: 03:37)

Well, I could just take the square root of this, and that is roughly equal to 11,225. Why did I take the square root? Let us just imagine unwrapping the retina; it is really a spherical cap, and I am imagining unrolling and flattening it out. If I were to do that it would not have a square shape, but I am just trying to make a very rough estimate here.
So, if I try to imagine what a rectangular screen should look like, then perhaps it should be roughly 11,000 by 11,000, if I wanted the total number of pixels that I present to an eye to match the total number of photoreceptors. Is that even a good idea? I am not sure. We could adjust further and say: why do I not just take the area of highest visual acuity, which is around here, because I am going to have the fovea aimed at the place where I am looking most of the time? So, maybe I should use that density instead, right.
So, I could make a more careful calculation: I could take the density at the fovea, and I will even round up a bit and say it is about 200,000 per square millimetre. It turns out that the area of the retina, which I looked up before class, is 1094 square millimetres; of course, there must be some variation among humans, but roughly 1000 square millimetres. So, imagine that the retina had maximum density in all places. Well, that is kind of a strange assumption; why would I do that? I will say why in a minute.
But imagine that this is the limiting case: if I had maximum density spread across roughly 1000 square millimetres, this would be about 200 million photoreceptors. (Refer Slide Time: 06:14)

And if I take the square root of that, this is about 14000. So, a little bit bigger.
(Refer Slide Time: 06:25)

The reason why I looked at the case of imagining the fovea propagated across the entire retina, in other words imagining that the fovea were so large that it has the highest density of photoreceptors everywhere, is because the eye can rotate. You can rotate the eye and look at the top and bottom of the screen, so effectively it becomes like that. If you are trying to ask how high the resolution of the screen should be, this is reasonable. So, a reasonable upper bound may be that a 16k by 16k display per eye should be sufficient for not perceiving pixels.
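The two estimates above can be reproduced in a few lines (a rough sketch using the lecture's numbers: 120 million rods plus 6 million cones, and a foveal density of 200,000 per square millimetre over roughly 1000 square millimetres):

```python
import math

# Estimate 1: a square display whose pixel count matches the total
# number of photoreceptors (120 million rods + 6 million cones).
total_photoreceptors = 120_000_000 + 6_000_000
side_total = math.isqrt(total_photoreceptors)
print(side_total)   # 11224: roughly an 11k x 11k display

# Estimate 2: pretend the whole retina had foveal density everywhere,
# since the eye can rotate and aim the fovea anywhere on the screen.
foveal_density_per_mm2 = 200_000
retina_area_mm2 = 1_000   # lecture quotes 1094 mm^2, rounded to 1000
side_foveal = math.isqrt(foveal_density_per_mm2 * retina_area_mm2)
print(side_foveal)  # 14142: so 16k x 16k per eye is a plausible upper bound
```

The square root turns a total receptor count into the side length of an imagined square display, which is exactly the unrolled-retina approximation described above.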
Now, at this point you might ask: why do I not just track which way the eye is looking, and then present highly dense information exactly in the right place where it needs to be, and not worry about the rest of the image? That would save a lot of effort in computer graphics, a lot of effort in trying to put out so many pixels across this entire display; all these pixels will be rendered on the off chance that your eye is looking at them, because the system does not know where your eye is looking, so it just has to render all of them. This great idea is called foveated rendering: track the eye and then only draw high resolution images in the place where we know the eye is looking, where the fovea can perceive these areas of greatest concentration. That is all fine, but it is more expensive to do the eye tracking and it introduces latency into the pipeline.
So, there is tracking latency, and then you have to do customized rendering for that. Maybe a few years down the road that will be feasible in the consumer space of products, but right now it is not effective enough at low cost, and may not even be effective enough at a very high cost, but it is right on the threshold, let us say.
Questions about this? Now, sometimes I look at this and feel motivated to go even higher and say maybe it should be 32k by 32k, because I looked at the number of photoreceptors but did not take into account the fact that there are r, g, and b photoreceptors. But wait, which way should I go in that case? I do not have this density for each of r, g, and b; if I pick just one of them, just the reds, I have a lower density of them. And when I look at my display, it has some kind of pattern of r, g, and b components as well. So, I have not taken into account the patterns of r, g, and b on the display and along the retina. If I take that into account, would this estimate increase or decrease?
Student: Decrease.
Perhaps it would decrease, yeah; it may decrease, so this may be sufficient, let us say overkill. An interesting question is: if I were to make a 4k by 4k display, would that be enough? Would you ever be able to perceive pixels at 4k by 4k? The honest answer right now is I do not know; I have never experienced it before and I cannot say. I feel fairly certain that at 16k by 16k per eye I would not perceive pixels, but.
But you know, the brain and the human vision system are often full of surprises. So, who knows, but it seems that this should be sufficient. Any questions about that?
Student: Sir.
Yes.
Student: Do eyes have asymmetry, like the one on the screen?
In what case?
Yes, there is some asymmetry. I believe it corresponds to which eye this is, and I am afraid to be quoted on this, but I believe you can see further in this direction, which I would guess is for evolutionary reasons: something may be coming from that side to eat you, and it is better to see as far as possible, whereas your nose tends to block the other side anyway. It is asymmetric, and it is the mirror image for the other eye. That is why I believe it is asymmetric like that. Anyone else? I wondered about it myself a few months ago and I looked it up; I believe that is the answer, but I could be wrong, all right.
Let me say a little bit more about photoreceptors, and then I want to start to get into the visual pathways, let us say, that lead from the photoreceptors up to your visual cortex.


Virtual Reality
Prof. Steve Lavalle
Department of Multidisciplinary
Indian Institute of Technology, Madras

Lecture – 9-2
Human Vision (light intensity)
So, I think it is nice to look at several cases in terms of light intensity. (Refer Slide Time: 00:18)

Now, first of all, when we talk about the intensity of light, one of the most natural kinds of measures, especially for engineering people, is a radiometric measure, which is based on energy. Instead, when we talk about light in the context of human vision, we use what are called photometric measures, which take into account human sensitivity to light by wavelength.
This is exactly related to the photoreceptor sensitivity plots that I showed you. So, there are measures that take this into account, and that is what we will use: photometric measures, rather than raw physics. That makes sense because the visible spectrum is special to us; the visible spectrum for other animals changes quite a bit. For example, there are some birds that have photoreceptors that can measure ultraviolet light, and they end up with beautiful patterns on their wings that are in the ultraviolet spectrum, and they can see that. So, for them that is the visible spectrum.
So, if these particular birds were defining the measures, they would use something different; but these photometric units are based on humans, and I will use just a common unit of luminance here, based on candelas.
(Refer Slide Time: 02:25)

Which, roughly speaking, is based on the light emanating from a candle. So, I will use candelas per square meter as a unit of radiating light, and I just want to give some examples, which appear in the textbook; a lot of the surrounding concepts that I am talking about here also appear there, in chapter 6.
(Refer Slide Time: 02:57)

Let us look at luminance, and how many photons land in a single receptor at a certain level of luminance. I will just give some cases here, with paper in starlight as the weakest. You are outside, there is no moon in the sky, there are no clouds, just stars, and you hold out a piece of paper; I assume you are not near a city or anything like that. Imagine it is very, very dark and you just have a piece of paper. Of course, it is hard to reproduce exactly.
I think this table is nice for comparative purposes. This corresponds roughly to 0.0003 candelas per square meter, which, in terms of photons hitting a photoreceptor, means you will get about 0.01 per second. Not very much, maybe barely above some noise threshold if you are lucky. That is the lowest end. Then we have paper in moonlight; this goes up to 0.2, and you get about 1 photon per receptor.
To illustrate the enormous range over which your photoreceptors seem to be useful: looking at a computer monitor, this is about 63 candelas per square meter; of course it depends on a lot of factors, but then you get about 100 photons per photoreceptor. Room light, where again there is variation, is about 316, and you get about a thousand photons per receptor. Looking up at the blue sky is about 2500; sure, it depends on where you are in the world. And finally, paper in direct sunlight: if you sit outside, imagine here in Chennai, sitting outside trying to read a book, perfect white paper in the sunlight, that is very, very bright.
(Refer Slide Time: 05:45)

So, paper in sunlight gets up to 100,000. That is quite a range when you look at it; I like this photons per receptor concept, and I am going from 0.01 up to 100,000 per receptor. At that point they get very, very saturated. I lived in Finland for a while and I think of snow blindness as well; remember we talked about the reflectance of snow last time. You can imagine snow in sunlight is extremely bright; maybe a few of you have not seen snow in sunlight before, I am not sure, but someday you will see it if you have not. So, one thing I want to talk about is that, because of the way rods and cones are divided up and have different functions, we end up with 2 different vision modes: one is called scotopic vision, which I am going to write on the other side of the board since I am afraid I am running out of space over there, and the other is called photopic vision.
(Refer Slide Time: 07:00)

So, these are 2 different modes of operation that our vision systems get into. The dominant photoreceptors for scotopic vision are the rods, and the dominant photoreceptors for photopic vision are the cones. Typical light levels for scotopic vision are less than 0.01 candelas per square meter, and for photopic vision greater than 10 candelas per square meter; of course, there is an intermediate region, so in actuality there is a gradual transition from one to the other, but at the extremes it is very clear what is going on.
As far as color perception goes, when you are in scotopic vision mode it is monochromatic, and in photopic mode it is trichromatic, based on the r, g, b sensitivities of your cones. There is an adaptation period in order to switch modes: it takes about 35 minutes to go fully into scotopic vision mode, which seems reasonable. If someone shines some bright lights, or you have been around light for a long time and you go outside, how long does it take before you can see really well in the dark? Maybe after a few minutes you are already improving.
But if you want to get completely into scotopic mode, it takes about half an hour. If you have ever done some work with telescopes, trying to look at the stars at night, it takes quite a while, about half an hour, before you can really see everything perfectly. Going in the other direction it is about 10 minutes to adjust. Pupil dilation is also a significant part of this: when you are in scotopic vision mode your pupils are dilated, taking in as much light as possible. So, this is another adjustment the eye is doing, yet another degree of freedom in the optical system. Roughly speaking, 90 percent of our neurons are devoted to photopic vision; that means we are daytime animals. Maybe that is no surprise, we are not owls or bats or something like that. We are basically designed to do the things we need to do to survive during the daytime, and then we are a lot more vulnerable at night, but not completely vulnerable, thanks to scotopic vision. So, we have something like 10 percent dedicated to that.
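The two regimes can be summarized in a small sketch (the thresholds and example luminances are the ones quoted in the lecture; labeling the in-between region "mesopic" is standard vision-science terminology that I am adding, not something stated here):

```python
# Example luminances from the lecture, in candelas per square meter.
examples_cd_per_m2 = {
    "paper in starlight": 0.0003,
    "paper in moonlight": 0.2,
    "computer monitor": 63,
    "room light": 316,
    "blue sky": 2500,
    "paper in sunlight": 100_000,
}

def vision_mode(luminance):
    """Classify the dominant vision mode using the lecture's thresholds."""
    if luminance < 0.01:
        return "scotopic"   # rod-dominated, monochromatic
    if luminance > 10:
        return "photopic"   # cone-dominated, trichromatic (r, g, b cones)
    return "mesopic"        # gradual transition region in between

for scene, lum in examples_cd_per_m2.items():
    print(f"{scene}: {vision_mode(lum)}")
```

Running this puts only starlight in the scotopic regime and moonlight in the transition zone, with everything from a monitor upward firmly photopic, which matches the day-time bias discussed above.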


Virtual Reality Engineering
Dr. M. Manivanan
Department of Biomedical Engineering
Indian Institute of Technology, Madras

Lecture – 39
Three Psychophysical Laws
Welcome back. Today, we are going to talk about how to measure experience. Virtual reality is all about experience; the whole course is about how to improve the experience in virtual reality. In the earlier classes we saw that in this course we are going to learn how to improve the immersion effects and the interaction effects.