
Data Compression Methods in Information Theory

Learn about the process of eliminating redundant bits to store data more efficiently with this free online course.

Publisher: NPTEL
Are you aware of the process of reducing a file's size without losing any of its information? This coding course provides a foundation for encoding data using fewer bits than the original representation requires. You will study the various kinds of data compression methods that make the most of your storage capacity, and learn the significance of re-encoding data to reduce its bitrate and save memory space.
  • Duration: 6-10 Hours
  • Students: 95
  • Accreditation: CPD



Description

Data compression is a branch of information theory that eliminates redundancy in data before storage or transmission. The focus of this course is to explain the various data compression techniques and algorithms used to reduce the size of a file. It begins by explaining the importance of different classes of codes used for encoding data. You will discover how a variable-length code maps source symbols to bit strings so that they can be decoded without error, and prefix-free codes are also explained. Subsequently, you will be taught about the condition the Kraft–McMillan inequality places on the codeword lengths of a prefix code, including how prefix codes can be represented using binary trees and intervals. Following this, you will study the two common classes of data compression, namely lossy and lossless: the processes of reducing the size of data with and without losing its original form are described.
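The course's own exercises are not reproduced here, but the Kraft–McMillan condition mentioned above is easy to sketch: a prefix-free binary code with codeword lengths l_1, …, l_n exists if and only if the sum of 2^(-l_i) is at most 1. A minimal illustration (the example codewords are our own, not taken from the course):

```python
def kraft_sum(lengths):
    """Kraft sum for a list of binary codeword lengths: sum of 2**-l."""
    return sum(2 ** -l for l in lengths)

def is_prefix_free(codewords):
    """Directly verify that no codeword is a prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

codewords = ["0", "10", "110", "111"]   # a valid prefix code
lengths = [len(c) for c in codewords]
print(kraft_sum(lengths))               # 1.0 -> the code is "complete"
print(is_prefix_free(codewords))        # True
```

A Kraft sum of exactly 1 means the binary tree representing the code has no unused leaves; a sum above 1 means no prefix-free (indeed, no uniquely decodable) code with those lengths can exist.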

Next, the course illustrates the use of symbols and their estimated probabilities in building prefix-free codes. You will discover the role of the Shannon-Fano algorithm in allocating codes to symbols according to their probabilities of occurrence, and study the concept of optimal codes, which minimize the average codeword length. You will explore the significance of Huffman coding in encoding source symbols by assigning shorter bit strings to more frequent characters, including the process of achieving the best compression ratio among such coding methods. Subsequently, the bounding of the actual code length against the optimal length using information entropy is explained. This leads to the fundamental postulates of universal compression algorithms. You will discover the significance of universal codes in transmitting data efficiently when the source distribution is not known in advance, and see how minimax redundancy serves as a benchmark for such codes.
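As a rough illustration of the Huffman construction described above (a sketch with our own example frequencies, not the course's material): repeatedly merge the two least probable subtrees, prepending a bit to each codeword in the merged pair.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code from a {symbol: frequency} mapping
    (assumes at least two symbols)."""
    # Heap entries: (weight, tiebreak, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        # Prepend 0 to one subtree's codewords and 1 to the other's.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
codes = huffman_codes(freqs)
# The most frequent symbol, 'a', receives the shortest codeword.
```

The resulting code is prefix-free by construction, and its average length is within one bit of the source entropy, which is the entropy bound the course covers.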

Finally, the course explores various techniques for performing compression using word frequencies. You will explore the role of frequency dictionaries and semantic networks in identifying the target lexicon and less frequent terms. Next, you will study the procedure for encoding files into numbers using arithmetic codes, and look at how close the codeword lengths produced by arithmetic coding come to the optimal value, ensuring a high compression rate with minimal execution time. Following this, you will be taught about methods of compressing the data stored in the rows and columns of a database, including procedures for minimizing storage space and improving query speed using various encoding techniques. 'Data Compression Methods in Information Theory' is an illuminating course that explores data compression from a real-world standpoint. Learn about the various algorithms used to reduce the size of different types of data by enrolling now and studying for free.
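The idea behind arithmetic coding mentioned above can be sketched in a few lines: the message is mapped to a subinterval of [0, 1) that is narrowed once per symbol, so that any number in the final interval identifies the whole message. This toy version (our own example model, ignoring the finite-precision arithmetic a real coder needs) shows the interval narrowing:

```python
def arithmetic_encode(message, probs):
    """Map a message to an interval [low, high) within [0, 1).
    Toy sketch: uses floats, so it is only accurate for short messages."""
    # Cumulative probability range for each symbol, in dict order.
    ranges, cum = {}, 0.0
    for s, p in probs.items():
        ranges[s] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        lo, hi = ranges[s]
        low, high = low + span * lo, low + span * hi
    return low, high

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
low, high = arithmetic_encode("abc", probs)
# Any value in [low, high) decodes back to "abc" under this model.
```

The final interval width equals the product of the symbol probabilities, so roughly -log2(width) bits suffice to name a point inside it; this is why arithmetic codes approach the entropy so closely.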
