Information Theory Lower Bounds
Information-theoretic lower bounds are derived on the time required by any algorithm for distributed function computation over a network of point-to-point channels with finite capacity.
This course introduces:
• Lower bound for the compression problem, derived using Chebyshev's inequality and Fano's inequality.
• Shannon's source coding theorem, which establishes the limits of possible data compression and the operational meaning of the Shannon entropy.
• Lower bound for random number generation, which relies on the continuity of entropy.
• Lower bound for hypothesis testing, derived using the data processing inequality and the log-likelihood ratio test.
• Two different proofs of the strong converse for source coding. The first proof examines the problem directly, determining the cardinality of the set of high-probability sequences, while the second proof uses Fano's inequality.
• Minimax lower bound for statistical estimation, derived using Markov's inequality and Fano's inequality.
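Fano's inequality, which appears in several of the bounds above, can be made concrete with a short numerical sketch. The helper names `entropy` and `fano_lower_bound` below are illustrative choices, not part of the course material; the sketch uses the weaker form of Fano's inequality, P_e ≥ (H(X|Y) − 1) / log₂|X|, with entropies measured in bits.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def fano_lower_bound(cond_entropy_bits, alphabet_size):
    """Weaker form of Fano's inequality: any estimator of X from Y
    errs with probability at least (H(X|Y) - 1) / log2(|X|)."""
    return max(0.0, (cond_entropy_bits - 1) / math.log2(alphabet_size))

# Uniform source over 8 symbols: H(X) = 3 bits.
p = [1 / 8] * 8

# If the observation Y is independent of X, then H(X|Y) = H(X) = 3 bits,
# so the error probability of any estimator is at least (3 - 1) / 3 = 2/3.
print(fano_lower_bound(entropy(p), 8))  # 0.666...
```

The same quantity drives both the compression lower bound and the minimax estimation lower bound listed above: a large residual conditional entropy forces a large probability of error.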
Log in to save your progress and obtain a certificate in Alison’s free Lower Bounds Using Information Theory Tools online course