Geometric Modeling of Alternate Worlds

Virtual Reality
Prof. Steve Lavalle
Department of Applied Mechanics
Indian Institute of Technology, Madras

Lecture – 03
Geometry of Virtual Worlds (geometric modeling)
Hello, welcome to the next lecture. Last time I gave you an introduction to the course and provided a bird's eye view, which was an overview of the hardware, software, and sensation and perception parts of virtual reality. During the software part, one of the key things I mentioned was the alternate world generator. So today, I want to go into the basic mathematics, the kinds of modelling and transformations that we need to do to set up an alternate world generator. (Refer Slide Time: 00:49)

So, today's lecture will be more fundamentally oriented, I will say, with geometric modelling of alternate worlds. The alternate world generator, in some sense, has to simulate or reconstruct some kind of world. It may be simulating a virtual world: it has geometry, it has physics, and the geometry and physics have to appear reasonable to your brain, to your perception, when you use virtual reality, when you are in this kind of experience. So, we may be simulating a virtual environment, or we may be capturing a real physical environment in some way; it could be far away or it could be recorded, or both, and we are transmitting that through this kind of head-mounted display.
And that provides another example of virtual reality. So, let W be a 3D world. (Refer Slide Time: 02:02)

So, I mean that W is some subset of R^3. I will just start doing mathematics in Cartesian coordinates on the reals in 3 dimensions. Let me define a coordinate system. We will have X going horizontally, we will have the Y axis going upward, and we have Z coming out of the board, done in a way so that we have a right-handed coordinate system.

Each point in R^3 is written (x, y, z) ∈ W.
So, I will prefer and use right-handed coordinate systems all the way through. In graphics and rendering it is sometimes preferable to use left-handed coordinate systems: in DirectX, if you are doing programming in that, it is left-handed; in OpenGL and a lot of other systems, it is right-handed. Most of math and physics is right-handed oriented, so I want to stay with the math in that way. Also, I will follow the convention used in the game industry and throughout much of graphics, which is that the Y axis is up.
Throughout most of engineering, I think, the Z axis is up, but we are going to stay consistent with the game convention, so that whatever you apply from game engines and related things ends up being consistent. So, this is a 3-dimensional world, and each point in this world has coordinates x, y, z, an element of W. So, I will be describing points with x, y, z coordinates.
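To make that convention concrete, here is a tiny sketch (not from the lecture; plain Python with numpy, and the variable names are just illustrative) that checks the right-handed, Y-up choice by verifying that x cross y gives z:

```python
import numpy as np

# Right-handed, Y-up convention used in the lecture:
# X points to the right, Y points up, Z comes out of the board.
x_axis = np.array([1.0, 0.0, 0.0])
y_axis = np.array([0.0, 1.0, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])

# In a right-handed frame, x cross y equals z.
assert np.allclose(np.cross(x_axis, y_axis), z_axis)
```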
(Refer Slide Time: 03:54)

So, the world has two kinds of models. The first main distinction I want to make here is: 1, we have stationary models. As the name implies, they never move. This can correspond to buildings, walls, trees (assuming the trees are not swaying in the wind), and so on. Think about things that, when you put them into the world, remain fixed all the time. That is the simplest case; in this case they are described in the world coordinate frame.
(Refer Slide Time: 04:50)

And so, we can just make up these x, y, z co-ordinates for whatever points we want to describe for the stationary models and leave them alone the whole time. One thing I should say, if you are building a virtual world, I highly recommend scaling these axes in a way that corresponds perfectly to the world you are making.
For example, if you use the metric system, then measure out everything in metres or centimetres, whatever it is that you would like to use, so that you can always remember what the scale correspondence is between your virtual world and the real world. Unless you are doing interesting VR experiments where you want to change your scale entirely, which is interesting by itself; I would rather do that as another transformation that you apply on top of this. It is very nice and helpful to keep the numbers in here consistent with measurements in the real world.
So that, if you draw a chair for example, the chair ends up being the right size, if you are trying to capture a chair from the physical world and bring it into the virtual world. Otherwise, you end up with problems of perception of scale and it gets very difficult. So, it is nice to keep these coordinates matched to what they would be in the real world. So, 2, the other kind of model inside of the world is movable models, and in this case these models have a space of possible transformations; I will write that down here. (Refer Slide Time: 06:47)

So, this may correspond to many moving things: it could be avatars, representations of yourself, anything else moving around, other characters, animals, moving bullets, blowing leaves, all sorts of things. Usually these are composed of rigid bodies; so, imagine having a bunch of rigid bodies attached together, each of which is defined in its own body frame.
So, we will not directly use the world coordinate frame, but each body will have its own frame, and then we will imagine a movable model as being a collection of one or more rigid bodies. They could be attached together just like a human skeleton, let us say, where each rigid body may correspond to a particular link, the links are all joined together by joints, and we end up with some kind of movable structure.
It could just be a single rigid body that simply moves around through the space; maybe you have a chair that can be moved around. So, it is rigid by itself, but it is a movable body; a character may be able to come up, grab the chair, and move it somewhere.
So, in these cases we have to talk about a space of possible transformations that can be applied; in order to do that, we start off by describing the body in its body frame.
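As a rough illustration of this idea (not from the lecture; Python with numpy, and the class and attribute names are my own), a movable body can be stored as vertices in its body frame together with a transformation that places it in the world, using the rotation-plus-translation form that comes later in the lecture:

```python
import numpy as np

class MovableBody:
    """Sketch only: geometry lives in the body frame; a rotation R and a
    translation t (covered later in the lecture) place it in the world."""

    def __init__(self, body_frame_vertices):
        self.vertices = np.asarray(body_frame_vertices, dtype=float)  # N x 3, body frame
        self.R = np.eye(3)      # rotation: identity until the body is moved
        self.t = np.zeros(3)    # translation: origin until the body is moved

    def world_vertices(self):
        # Apply the current transformation to every body-frame vertex:
        # p_world = R p_body + t (written in row-vector form below).
        return self.vertices @ self.R.T + self.t
```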
(Refer Slide Time: 08:44)

So, we generally have two modelling choices when we are making geometric models of these bodies that are in the world, whether they are stationary or movable. (Refer Slide Time: 09:00)

Whether they are stationary or movable: one choice is to have what is called a solid representation, and the second is what is called a boundary representation.
So, in the case of a solid representation, we have three-dimensional primitives; the most basic description unit we have represents some 3-dimensional chunk of the world or of the body. In the boundary representation case, we have 2D primitives. There are lots of details to this, lots of interesting technical complications, and a lot of trouble comes about based on which representation you choose: you solve some problems and then other problems emerge.
And I will give you a little bit of a hint of that today, but I will not be covering it in full detail. So, one thing to think about with regard to this is obstacles and collision detection. In the real world, when I walk up to this table my motion is physically obstructed; that is one kind of obstacle. If I put this notebook in front of me, I also have some visual obstruction as well, and these two could be independent, right?
I could have some fog that does not obstruct my motion, but just obstructs my visibility. What is an example of an obstacle that obstructs my motion, but not my visibility?
Student: Glass
Glass, very good. So, I can look through glass and yet I get obstructed; you may be surprised by that. So, those are two different kinds of obstructions; you could model these things and represent them. I have not talked about rendering or graphics yet; we will get to that part when we talk about drawing the world and presenting it in a way that is appropriate for your vision sense. We have not got to that yet; we are just describing the geometry of the world.
(Refer Slide Time: 11:34)

So, with regard to these two choices, suppose I have one body, let us say, represented like this, and I have another body that is here. In this case I would say they are in collision because they intersect, right, there is a non-empty intersection. Now suppose I have some kind of solid representation; I am only drawing this in two dimensions, so the solid representation looks 2D and the boundary representation will look 1D.
So, if I just think about representing these by their boundaries, just the outer perimeter of the yellow blob and the white blob, then these also intersect at these points here. If I were going to do some kind of intersection test, I could look only at the boundary: maybe I break it into tiny line segments and do line segment tests, or maybe I get a little more clever with the algebra and realize these are circles.
So, I could do some circle intersection tests. But what happens when I have this case, right? In this case, if I am using a solid representation, I may be able to tell very quickly, depending on how I have decided to represent it, that the yellow disc is in fact in the interior of the white disc.
But if I have a boundary representation, there is no intersection of the boundaries. So, it is something very important to think about: there is some kind of meaning behind these representations.
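To make the distinction concrete, here is a small sketch (illustrative Python, not from the lecture) contrasting a solid-style disc overlap test with a boundary-only circle intersection test; the boundary test misses the case where one disc is entirely inside the other:

```python
import math

def discs_overlap(c1, r1, c2, r2):
    # Solid view: two discs collide whenever the distance between centres
    # is at most the sum of the radii (containment counts as a collision).
    return math.dist(c1, c2) <= r1 + r2

def circles_intersect(c1, r1, c2, r2):
    # Boundary view: the two circles (curves only) meet if and only if
    # |r1 - r2| <= distance between centres <= r1 + r2.
    d = math.dist(c1, c2)
    return abs(r1 - r2) <= d <= r1 + r2

# Yellow disc strictly inside the white disc:
print(discs_overlap((0.0, 0.0), 1.0, (0.2, 0.0), 3.0))      # True: solid test sees it
print(circles_intersect((0.0, 0.0), 1.0, (0.2, 0.0), 3.0))  # False: boundaries never meet
```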
(Refer Slide Time: 13:26)

And it is somewhat implicit, and then later, as you start developing, you realize some of the shortcomings and problems. Let me give you an example where it gets a little bit worse. It may be that I have a boundary representation and it has a few mistakes in it.
So, this is the boundary representation that I have of one object or body, and maybe the other one is fine and looks like that. That is the boundary representation of that one, and I am still trying to infer whether or not this yellow disc is inside or outside of the white disc.
And it may be that I have a perfectly defined disc here, but out here, because I have these breaks, I am not even sure there is a well-defined notion of inside or outside. What does it mean to be inside or outside of this? How can you define that? If this is a closed curve then, by the Jordan Curve Theorem, there is an inside and an outside and that is all there is; it very nicely partitions space into two regions. But when there is this break in here, it is not very well defined, and you can imagine that the same thing happens in 3 dimensions and it is much worse.
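For a closed polygonal boundary, the usual way to decide inside versus outside is an even-odd (ray casting) test; here is a minimal sketch (illustrative Python, not from the lecture), which only makes sense when the curve really is closed:

```python
def point_inside_closed_polygon(p, polygon):
    """Even-odd (ray casting) test: shoot a ray to the right of p and count
    edge crossings. This is only meaningful for a closed curve (Jordan
    Curve Theorem); if the boundary has gaps, inside/outside is undefined."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wraps around: the curve is closed
        if (y1 > y) != (y2 > y):       # the edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(point_inside_closed_polygon((0.5, 0.5), square))  # True
print(point_inside_closed_polygon((2.0, 0.5), square))  # False
```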
And you have to think about where your model comes from. If you are drawing the model yourself, maybe using a design tool such as Maya or Blender, and you are making your own models, perhaps you can get everything very consistent; this is called model coherency. This particular example I have given has a lack of coherency: you meant this to be a closed curve, but it ends up getting broken.
So, you have to ask yourself what the coherency, or perhaps consistency, of the model is. If you are extracting this model using depth cameras, right, maybe you have computer vision algorithms running, you are also getting depth information, and you are trying to build some kind of surface description. What do you think? The chances are that there are going to be little holes in your data, and that is going to make a mess.
So, you may have to do a lot of work to build a coherent model, and perhaps even more work to build a nice solid representation of this, but once you do that it will be easier to determine collisions. These are just some of the kinds of difficulties you end up with. Also, because it is difficult to know which side of these boundary pieces you are on, it sometimes gets difficult to render surfaces in computer graphics: you are not sure whether the surface normal that you are looking at is pointing outside or inside of the object to be rendered, and there are a lot of difficulties with that.
If you do some rendering on your own, mistakes often come up because the surface normals are pointing in the wrong direction. So, these are some different choices at the highest level. I am going to work only with boundary representations, even though I think solid representations are cleaner in a lot of ways. One of the reasons why we should talk about boundary representations here is that this is primarily what is done in game engines, all the way down to GPUs.
So, the graphics processing units that are in our computers are geared for boundary representations, and in particular representations that are made of 3D triangles. So, why do we not pick the simplest case: boundary representations in which the 2-dimensional primitives are triangles that sit in 3-dimensional space.
So, I call them 2D because it is a 2 dimensional triangle; it is not a 3 dimensional simplex or pyramid of some kind that is sitting in space; it is a 2 dimensional surface that sits in 3 dimensional space.
(Refer Slide Time: 16:43)

So, it is a 2D primitive. Let us use triangle primitives. In order to describe a triangle, we generally mean this 2-dimensional surface; it is assumed to be flat, lying in a plane.
And so, all you need to define are the vertices of the triangle: (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3).
So, I can start defining triangles like this. If I put them into the world, I define them in the world coordinate frame for stationary models; and again, if they are movable models, then we define them in the body frame, and I will say later what to do, how to transform those. There are also some common tricks for gluing triangles together in order to save space and to make more efficient data structures to access all of the triangles in a quick manner.
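Concretely, a triangle is nothing more than its three vertices, and a stationary model is just a list of them in world coordinates; a minimal sketch (illustrative Python, not from the lecture) might look like this:

```python
# A triangle is just its three vertices in 3D, and a stationary model is a
# list of such triangles with coordinates given directly in the world frame
# (in metres, following the scaling advice above).
triangle = ((0.0, 0.0, 0.0),   # (x1, y1, z1)
            (1.0, 0.0, 0.0),   # (x2, y2, z2)
            (0.0, 1.0, 0.0))   # (x3, y3, z3)

stationary_model = [triangle]  # further triangles would simply be appended
```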
(Refer Slide Time: 18:10)

And one thing that may happen is that the triangles are glued together to form strips. So, you could have a strip of triangles and do operations on that, or, more generally, you may find the triangles are put together in some kind of mesh, and there are all sorts of interesting issues in terms of what properties we should maintain in that mesh in order to make the computations that we perform later efficient.
If we are studying, or let us say solving, problems of rendering, to determine what the lighting should be across the surface, we would like to have not too many triangles, but we would also like to represent the surface very well. If it has a lot of curvature we might need to make the triangles smaller, and we would also like to avoid having very thin triangles. If we captured a surface from a depth camera, how do we build a nice representation of this? There are a lot of algorithms for this from the field of computational geometry.
You can study these; people are very interested in starting from raw data and then building nice surface representations like this, boundary representations, especially ones without incorrect holes in the data. So, when you put these together, when you make strips or a mesh, if the model is coherent then many of the vertices are shared. That reduces the overall amount of data storage, and there are very clearly defined adjacencies between triangles, so you can make data structures with pointers that move around.
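For instance, a shared-vertex (indexed) mesh, a simpler cousin of the pointer-based structures just mentioned, could be sketched like this (illustrative Python, not from the lecture):

```python
# Indexed-mesh sketch: two triangles sharing an edge store 4 vertices
# instead of 6; each triangle is a triple of indices into the vertex list.
vertices = [(0.0, 0.0, 0.0),
            (1.0, 0.0, 0.0),
            (1.0, 1.0, 0.0),
            (0.0, 1.0, 0.0)]

triangles = [(0, 1, 2),   # first triangle
             (0, 2, 3)]   # second triangle shares vertices 0 and 2

def triangle_coordinates(tri):
    # Recover the actual (x, y, z) coordinates of one triangle.
    return [vertices[i] for i in tri]

print(triangle_coordinates(triangles[1]))  # [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
```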
One example is a doubly connected edge list, which comes from computational geometry; with it, you can very quickly move across and propagate computations across these surfaces, which you might need to do. So, that is about all I will say about basic modelling and primitives. Are there any questions about that?

Virtual Reality
Prof. Steve Lavalle
Department of Applied Mechanics
Indian Institute of Technology, Madras

Lecture – 3–1
Geometry of Virtual Worlds (Transforming Models)
I want to now switch to transforming models. So, I want to talk about transforming rigid bodies.
So, I will go over here and change topics a bit; I will erase here. (Refer Slide Time: 00:26)

Transforming rigid bodies: probably all of you have had some experience in performing transformations on bodies. You may have done translations and rotations before in other subjects that you have studied. What might not be as familiar is thinking about the space of all transformations that could be applied, representing that in some nice way, or performing estimation, designing filters, on the space of all transformations. That is considerably more complex, and my goal is to make it hopefully simple and clear today. So, I want to try to explain some of these things about the space of all transformations.
In terms of why we transform, I want to make some connection back to the last lecture that we had. So, why are we performing transformations? Well, it should be clear in the case of movable models; I just talked about that. But there is another case, which I think connects very nicely to our lectures from last time, which is perception of stationarity. That is somewhat counterintuitive: I need to perform transformations because I need to un-move something that has moved. Now, here is the idea. I see all of you today in the real world; when I rotate my head you all move appropriately, in the way that I am familiar with, all right; with respect to my head you are essentially counter-rotating in the opposite direction. So, if I design a virtual reality system, suppose it is a CAVE-like system or a surround-sound audio system, it is fixed to the world, so that when I turn my head the stimulus is counter-rotating appropriately.
Now, if I mount the stimulus on my head, if I wear earphones or 'eyephones', a head-mounted display let us say, then when I rotate my head the stimulus is unfortunately going to follow it. So, if I want to simulate stationarity, if I want to simulate the stimulus being fixed to the outside world, I have to counter-rotate. Does that make sense? It confuses people a lot. So, you end up applying an inverse transformation to simulate stationarity. We talked about the vestibulo-ocular reflex, going back and forth like this; that is another kind of perception of stationarity, where your body is designed to have your eyes counter-rotate with respect to your head.
Now, we are designing a virtual reality system that is mounted. So, when you mount a stimulus to your body, you have to compensate for the transformations that your body is performing. You have to undo those and take that into account, hopefully with very low latency, so that your brain does not become confused, right; it would become unfamiliar input otherwise.
So, we need to counter-rotate the mounted stimulus; more generally, more general than counter-rotate is to inverse transform. And because we have to do that, this requires tracking, or filtering, or estimation, whatever you like to call it: we need to use sensors and estimate the motion of the human body that has occurred, and then, when we figure out what that transformation was, we apply the inverse of it to compensate, to give you again this great idea, the perception of stationarity. Does that make sense? So, those are two reasons why we are studying transforms today. The first one is somewhat obvious: models are moving in the world. The second one, which is a little bit less obvious, is to give you the perception of stationarity, to make you think that the outside world is in fact not moving. Of course, both of these could be happening together, making a big complicated mess, all right.
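As a sketch of that compensation (illustrative Python with numpy, not from the lecture; the function name is my own), if the tracker estimates the head rotation as a matrix R, the rendered world is rotated by the inverse of R, which for a rotation matrix is just its transpose:

```python
import numpy as np

def counter_rotation(estimated_head_rotation):
    """Sketch of the compensation: the tracker estimates the head rotation R;
    the rendered (mounted) stimulus is rotated by R inverse, which is simply
    the transpose because R is a rotation matrix."""
    R = np.asarray(estimated_head_rotation, dtype=float)
    return R.T

# Example: head yawed 30 degrees about the (vertical) Y axis.
c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
R_head = np.array([[  c, 0.0,   s],
                   [0.0, 1.0, 0.0],
                   [ -s, 0.0,   c]])
R_view = counter_rotation(R_head)
assert np.allclose(R_view @ R_head, np.eye(3))  # the two cancel: perceived stationarity
```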
(Refer Slide Time: 04:48)

So, under transforming rigid bodies, let us imagine transforming each vertex of each triangle, simple as that. There are going to be 3 cases that I cover today. We have the easy case, which is translation, and in that case I am going to go over and talk about the number of degrees of freedom, the DOFs, in 2D and 3D: for translation only, we have 2 degrees of freedom in 2D and 3 degrees of freedom in 3D, and this is really mathematically the easiest case to define and understand, no problems really. The harder case is going to be rotation, which in 2D is very easy.
How many degrees of freedom do we have for 2D rotation?
Student: 1.
Just 1, and in 3 dimensions we have 3, and this is the part where it gets harder. Things get a little more complicated. It is not hard to apply a rotation matrix and see the result, but it is a lot harder to reason about the space of all 3D rotations. It ends up getting considerably complicated; I hope to convince you of that, show you some of the kinds of problems, and then help teach you to step around the problems and do things in a nice way so that you do not end up with unfortunate difficulties later. And then the hardest case, which is not too much harder than 3D rotation, is when we put them together: you get rotation followed by translation. Once I have that, I can put a rigid body into any configuration that I like; I can translate it and I can rotate it. So, I can place it anywhere in the world without distorting the body.
That will be a very important property; that is why we write the word rigid here. It means that the body itself does not get distorted, it just gets translated and rotated; it does not mean it is rigidly attached to the ground. It just means the body itself is internally rigid, so all distances between pairs of points remain preserved, and so forth; there is more to it than that, but I will get to it soon. (Refer Slide Time: 07:28)
So, all we do is add the degrees of freedom here: we get 3 degrees of freedom in 2D and 6 degrees of freedom per rigid body in 3D, for 3-dimensional models. That is the case we are going to be handling most often, of course, because 2-dimensional virtual reality is probably not very interesting to think about.
(Refer Slide Time: 08:05)

So, let us start off with the very easy case, just for completeness and consistency, even though this one should be so simple that you might be a bit bored. So, translation: suppose we just want to essentially shift the triangle by some amount; I will write it as x sub t for translation, y sub t for translation, and z sub t for translation. So, we want to shift it by some amount, and we just take the coordinates of the triangle. I do not think I want to write out all 3 coordinates; let us just pick one of the vertices and call it (x_i, y_i, z_i). And so, if we want to take this point and translate it, then we just apply the following transformation: we take (x_i, y_i, z_i) and transform it to (x_i + x_t, y_i + y_t, z_i + z_t). Again, very simple; we are just performing some translation on it. And of course, here i could be 1, 2, or 3, if we just number the vertices as 3 different points (x_1, y_1, z_1), (x_2, y_2, z_2), and (x_3, y_3, z_3), right. So, that is the easy case; everybody agrees that is very simple. You should have done that somewhere before.
So, if that is the case, then consider the amount of translation that is performed when we move the triangle somewhere else; suppose this triangle gets moved over here after performing the transformation. Then the amount of displacement, for example along the x-direction here, would be x_t. So, that is all we are doing here, just translating it along. Questions about that? Not very much here to talk about.
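A minimal sketch of this translation applied to all three vertices of a triangle (illustrative Python, not from the lecture):

```python
def translate_triangle(triangle, xt, yt, zt):
    # Apply (x, y, z) -> (x + xt, y + yt, z + zt) to each of the 3 vertices.
    return tuple((x + xt, y + yt, z + zt) for (x, y, z) in triangle)

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(translate_triangle(tri, 2.0, 0.0, -1.0))
# ((2.0, 0.0, -1.0), (3.0, 0.0, -1.0), (2.0, 1.0, -1.0))
```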

