Virtual Reality Tracking and Rendering

Learn about the purpose of a VR tracking system and the concept of visual rendering in this free online course.

Publisher: NPTEL
This free online course will examine positional tracking VR technology and the benefits of the VR experience. You will be shown the concept of visual rendering, which specifies what the visual display should show through an interface to the virtual world generator (VWG). You will learn about orientation tracking, how to integrate sensor readings to estimate orientation, and rendering for captured, rather than synthetic, virtual worlds.
  • Duration

    4-5 Hours


Keeping track of motion in the physical world is a crucial part of any VR system. Tracking was one of the largest obstacles to bringing VR headsets into consumer electronics, and it will remain a major challenge as VR experiences continue to expand and improve. This course begins with the simpler case of tracking rotation around a single axis, then extends the framework to tracking the three-degree-of-freedom (3-DOF) orientation of a 3D rigid body. Highly accurate tracking methods have been enabled largely by commodity hardware components, such as inertial measurement units (IMUs) and cameras, which have plummeted in size and cost thanks to the smartphone industry. This course will teach you about tracking position and orientation together, which in most systems requires line-of-sight visibility between a fixed part of the physical world and the object being tracked. This free online course will also discuss tracking multiple bodies that are attached together by joints.
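As a taste of the single-axis case described above, angular-velocity readings from a gyroscope can be integrated over time to estimate an orientation angle. The following is a minimal Python sketch under assumed conditions (the function name, fixed time step, and sample values are illustrative, not taken from the course):

```python
import math

def integrate_yaw(gyro_readings, dt):
    """Estimate a single-axis orientation angle by integrating
    angular-velocity readings (rad/s) sampled at fixed intervals dt (s)."""
    theta = 0.0
    for omega in gyro_readings:
        theta += omega * dt  # simple Euler integration; drift accumulates over time
    # Wrap into [-pi, pi) for a canonical angle
    return (theta + math.pi) % (2 * math.pi) - math.pi

# Example: a constant 0.5 rad/s rotation for 2 seconds at 100 Hz
readings = [0.5] * 200
print(integrate_yaw(readings, 0.01))  # 1.0 rad
```

In practice, pure integration drifts because sensor noise and bias accumulate, which is why real systems fuse gyroscope readings with other sensors.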

Next, you will be shown the principle of using sensors to build a representation of the physical world so that it can be brought into the virtual world. You will learn about the most powerful paradigm for 6-DOF tracking. You will be taught primary visibility algorithms based on ray casting, which provide real-time performance and a feature set well suited to rendering virtual reality. This course also shows the main difference between the two major families of rendering methods and how each handles visibility. With the widespread use of AR/VR head-mounted displays, there is an increasing demand for pinhole cameras to test their image quality. However, it has been a challenge to design a pinhole camera with a wide field of view (FOV), high image performance, and low distortion while remaining compact and lightweight. Furthermore, you will be shown the pinhole camera model and how tracking is done using a camera.
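The pinhole camera model mentioned above maps a 3D point in camera coordinates to image coordinates by perspective division. A minimal sketch, assuming an ideal pinhole with focal length in pixel units and no lens distortion (the function name and values are illustrative):

```python
def project_point(point_3d, focal_length):
    """Project a 3D point (x, y, z) in camera coordinates onto the
    image plane using the ideal pinhole model: u = f*x/z, v = f*y/z."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# A point 2 m ahead and 1 m to the right, with f = 800 pixels:
print(project_point((1.0, 0.0, 2.0), 800.0))  # (400.0, 0.0)
```

Camera-based tracking inverts this mapping: given the 2D image positions of known 3D features, the tracker solves for the pose of the camera or the tracked object.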

Finally, you will learn about visual rendering and what the visual display should show through an interface to the virtual world generator (VWG). The course examines how to prepare models for rendering and then shows how to use integrated rendering programs to create graphic images. It also covers the basic concepts of ray tracing and rasterization; these are considered the core of computer graphics, but VR-specific issues also arise. These methods mainly address the case of rendering virtual worlds that are formed synthetically. The course goes on to explain latency reduction, which is critical to VR so that virtual objects appear in the right place at the right time, and addresses VR-specific problems that arise from imperfections in the optical system. This course is important for learners who want an in-depth understanding of tracking and the benefits it brings to the VR experience, as well as of the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model. Background knowledge in math, physics, and computer science is required before starting this course.
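At its core, the ray-casting approach to visibility mentioned above reduces to finding the nearest intersection of a viewing ray with scene geometry. A minimal ray-sphere intersection sketch, assuming a unit-length ray direction (the function name and scene values are illustrative, not from the course):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along a unit-length ray to the nearest
    intersection with a sphere, or None if the ray misses it."""
    # Solve |o + t*d - c|^2 = r^2 for t, a quadratic in t.
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None       # ignore hits behind the origin

# Ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full ray caster repeats this test per pixel against every object and keeps the smallest positive t, whereas rasterization projects the objects onto the screen and resolves visibility with a depth buffer.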

Start Course Now