
Understanding Information and Statistical Inferences

Learn about the process of measuring and examining a random sample of the population using statistical methods.

Publisher: NPTEL
Have you ever wondered how to analyse data and draw valuable conclusions from it? This course aims to answer that question by showing how to assess the plausibility of a hypothesis against different datasets. You will study statistical methods that use concepts and graphs to make decisions about population probability distributions. Investigate how a finding becomes statistically significant by enrolling in this course today.
  • Duration

    1.5-3 Hours







This course aims to communicate the interplay between information theory and statistics by introducing the basic elements of statistical decision theory. It begins with the procedure for estimating parameters and testing hypotheses using various statistical methods. First, you will discover how to cast detection problems as binary and ‘M-ary’ hypothesis tests. Next, you will study the practice of using data analysis to infer properties of an underlying probability distribution, which includes making estimates and predictions from the best available information. Following this, we teach several conventional approaches to hypothesis testing. We then demonstrate how to ascertain the statistical power of a hypothesis test using the Neyman-Pearson formulation, and examine how to select the better of two competing statistical models by assessing their goodness of fit through the ratio of their likelihoods.
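The likelihood-ratio test at the heart of the Neyman-Pearson formulation can be sketched in a few lines. The example below is illustrative, not taken from the course: it decides between two Gaussian hypotheses with assumed means 0 and 1, shared noise level, and a threshold chosen by the caller.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std dev sigma at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood_ratio(x, mu0, mu1, sigma):
    """log [ p(x | H1) / p(x | H0) ] for two Gaussian hypotheses."""
    return math.log(gaussian_pdf(x, mu1, sigma) / gaussian_pdf(x, mu0, sigma))

def neyman_pearson_decide(x, mu0, mu1, sigma, threshold):
    """Declare H1 (return 1) when the log-likelihood ratio exceeds the threshold,
    otherwise declare H0 (return 0). The threshold is set to meet a target
    false-alarm probability in the Neyman-Pearson formulation."""
    return 1 if log_likelihood_ratio(x, mu0, mu1, sigma) > threshold else 0

# Illustrative use: an observation near 1 favours H1, one near -1 favours H0.
print(neyman_pearson_decide(2.0, 0.0, 1.0, 1.0, 0.0))   # decides H1
print(neyman_pearson_decide(-1.0, 0.0, 1.0, 1.0, 0.0))  # decides H0
```

Sweeping the threshold trades false alarms against missed detections; the Neyman-Pearson lemma says no other test of the same false-alarm rate has higher power.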

Familiarise yourself with the procedure for quantifying the dissimilarity between two probability distributions. You will discover the significance of the Kullback-Leibler (KL) divergence in measuring how far a given arbitrary distribution lies from the true distribution. Have you ever wondered how much information is revealed by a single coin toss? You will discover how to build a heuristic model, using Bayes’ theorem, for quantifying the information disclosed by a coin toss. We illustrate the procedures for testing more than one hypothesis simultaneously, and you will understand how multiple-hypothesis testing applies to the same or dependent datasets. We then demonstrate how to estimate the parameters of a probability distribution by maximising a likelihood function. Subsequently, you will study a measure of similarity between two labellings of the same data, including how the average information lost in a noisy channel relates to the probability of categorisation error.
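Two of the quantities mentioned above are short enough to sketch directly: the KL divergence between two discrete distributions, and the information (entropy, in bits) revealed by one toss of a coin. The specific distributions in the comments are illustrative assumptions, not examples from the course.

```python
import math

def kl_divergence(p, q):
    """D(P || Q) = sum_i p_i * log2(p_i / q_i), in bits.
    Assumes p and q are distributions over the same outcomes and that
    q_i > 0 wherever p_i > 0 (otherwise the divergence is infinite)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def coin_entropy(p):
    """Information revealed by one toss of a coin with heads-probability p,
    i.e. the binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0  # a deterministic coin reveals nothing
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A fair coin reveals exactly 1 bit per toss; a biased coin reveals less.
print(coin_entropy(0.5))               # 1.0
# A distribution has zero divergence from itself, positive from anything else.
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))
```

Note that the KL divergence is not symmetric and does not satisfy the triangle inequality, which is why it is called a divergence rather than a distance.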

Finally, you will study how mutual information measures the reduction in uncertainty about one random variable given knowledge of another. In addition, you will see how Fano’s inequality bounds the probability of error in estimating one random variable from another, for variables taking values in a finite set. Lastly, you will learn about a set of probabilistic tools for characterising the fundamental, asymptotic performance limits of binary and multiple hypothesis testing. The core components of this course are the statistical methods for testing hypotheses and estimating the parameters of a population distribution. ‘Understanding Information and Statistical Inferences’ is an informative course highlighting quantities such as entropy, mutual information, total variation distance and Kullback-Leibler divergence, and explaining the role these quantities play in critical problems in communication, statistics and computer science. So why wait? Enrol and explore how information-theoretic methods predict performance in statistical decision theory.
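Mutual information as "reduction in uncertainty" can be computed directly from a joint distribution. The sketch below uses two illustrative 2x2 joint distributions (independent vs perfectly correlated binary variables); these examples are assumptions for demonstration, not material from the course.

```python
import math

def mutual_information(joint):
    """I(X; Y) in bits from a joint distribution given as a matrix,
    where joint[i][j] = P(X = i, Y = j).
    I(X; Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    px = [sum(row) for row in joint]            # marginal of X
    py = [sum(col) for col in zip(*joint)]      # marginal of Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Independent fair bits: knowing Y says nothing about X, so I(X;Y) = 0.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
# Perfectly correlated fair bits: Y reveals X completely, so I(X;Y) = 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
```

The two extremes bracket the general case: mutual information is always between zero (independence) and the entropy of the less uncertain variable.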

Start Course Now