    Computer Vision
    Computer Vision is a discipline that develops methods to help computers see and understand the content of digital images, such as photographs and videos.
    Historical Timeline of Computer Vision
    1959 – The first digital image scanner was invented; it worked by transforming images into grids of numbers.
    1963 – Larry Roberts described the process of deriving 3D information about solid objects from 2D photographs.
    1966 – Marvin Minsky instructed a graduate student to connect a camera to a computer and have it describe what it saw.
    1970s – Quantitative approaches to computer vision were developed during this decade, including the first of many feature-based stereo correspondence algorithms.
    1980s – A lot of attention was focused on more sophisticated mathematical techniques for performing quantitative image and scene analysis.
    1990s – The most notable development in computer vision during this decade was the increased interaction with computer graphics, especially in the cross-disciplinary area of image-based modeling and rendering.
    2000s – The trend that now dominates much of visual recognition research emerged: the application of sophisticated machine learning techniques to computer vision problems.
    Image Formation
    An image is formed when the light reflected from an object is captured through optics onto a sensor plane. The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed.
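    The geometric side of image formation is often summarised by the ideal pinhole camera model, in which a 3D point (X, Y, Z) projects to image coordinates (f·X/Z, f·Y/Z) for focal length f. The sketch below is a minimal illustration of that model only; the function name, focal length and toy points are assumptions for demonstration, not part of the course material.

```python
import numpy as np

def project_pinhole(points_3d, focal_length=1.0):
    """Project 3D points (X, Y, Z) onto the 2D image plane of an ideal
    pinhole camera: x = f * X / Z, y = f * Y / Z."""
    points_3d = np.asarray(points_3d, dtype=float)
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    x = focal_length * X / Z
    y = focal_length * Y / Z
    return np.stack([x, y], axis=1)

# The same corner placed further from the camera lands closer to the image centre.
corners = np.array([[1.0, 1.0, 2.0],
                    [1.0, 1.0, 4.0],
                    [1.0, 1.0, 8.0]])
print(project_pinhole(corners, focal_length=1.0))
# [[0.5   0.5  ]
#  [0.25  0.25 ]
#  [0.125 0.125]]
```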
    Image Processing
    Image Processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it.
    The main classes of image-processing operators are point operators, local operators and global operators: a point operator depends only on the corresponding input pixel, a local operator also uses a neighbourhood of pixels, and a global operator can depend on the whole image.
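    As a small, hedged illustration of the simplest class, the sketch below implements a point operator as a gain-and-bias (contrast and brightness) adjustment; the function name and the example values are assumptions chosen only to show that each output pixel depends on a single input pixel.

```python
import numpy as np

def point_operator(image, gain=1.2, bias=10):
    """Point operator: each output pixel depends only on the corresponding
    input pixel, here a contrast (gain) and brightness (bias) adjustment."""
    out = gain * image.astype(float) + bias
    return np.clip(out, 0, 255).astype(np.uint8)

# Illustrative 8-bit grayscale image filled with mid-grey values.
image = np.full((4, 4), 100, dtype=np.uint8)
print(point_operator(image)[0, 0])   # 130 = 1.2 * 100 + 10
```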
    Image Filtering
    Image Filtering involves the application of window operations that perform useful functions, such as noise removal and image enhancement.
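    One minimal example of such a window operation is a mean (box) filter, sketched below in plain NumPy; the function name, window size and toy "noisy" image are illustrative assumptions, and real pipelines would typically use an optimised library routine instead.

```python
import numpy as np

def box_filter(image, size=3):
    """Slide a size x size averaging window over the image; replacing each
    pixel with the mean of its neighbourhood suppresses pixel-level noise."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

# A single noisy pixel is pulled back towards the value of its neighbours.
noisy = np.full((5, 5), 50.0)
noisy[2, 2] = 250.0
print(box_filter(noisy)[2, 2])   # about 72.2 instead of 250
```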
    Fourier Transform
    The Fourier Transform is an important image processing tool that is used to decompose an image into its sine and cosine components. It is used in a wide range of applications, such as image analysis, image filtering, image reconstruction and image compression.
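    The short sketch below is one hedged way to see this decomposition in practice: it uses NumPy's FFT to move a toy image into the frequency domain, keeps only the low frequencies, and reconstructs a smoothed image. The toy image, mask size and variable names are assumptions made purely for illustration.

```python
import numpy as np

# Toy image: a smooth horizontal ramp (low frequency) plus a fine
# checkerboard pattern (high frequency).
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
image = x / w + 0.3 * ((x + y) % 2)

# Decompose the image into its frequency components, keep a small block of
# low frequencies around the centre of the shifted spectrum, and
# reconstruct: a crude low-pass filter in the Fourier domain.
spectrum = np.fft.fftshift(np.fft.fft2(image))
mask = np.zeros((h, w))
cy, cx, r = h // 2, w // 2, 8
mask[cy - r:cy + r, cx - r:cx + r] = 1.0
smoothed = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

# The reconstruction keeps the ramp but largely removes the checkerboard.
print(round(image.std(), 3), round(smoothed.std(), 3))
```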
    Interpolation and Decimation
    The process of increasing the sampling rate is called interpolation and the process of decreasing the sampling rate is called decimation. In other words, interpolation can be used to increase the resolution of an image, while decimation reduces it.
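    The sketch below shows the two operations in their simplest possible form: nearest-neighbour upsampling by pixel repetition and decimation by keeping every second sample. The function names and the toy image are assumptions, and the comment notes that practical decimation is normally preceded by low-pass filtering to avoid aliasing.

```python
import numpy as np

def interpolate_nn(image, factor=2):
    """Nearest-neighbour interpolation: raise the sampling rate by
    repeating each pixel `factor` times along both axes."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def decimate(image, factor=2):
    """Decimation: lower the sampling rate by keeping every `factor`-th
    sample along both axes (real pipelines low-pass filter first to
    reduce aliasing)."""
    return image[::factor, ::factor]

image = np.arange(16).reshape(4, 4)
print(interpolate_nn(image).shape)   # (8, 8)  -- higher resolution
print(decimate(image).shape)         # (2, 2)  -- lower resolution
```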