
Understanding Information and Statistical Inferences

Learn about the process of measuring and examining a random sample of the population using statistical methods.

Publisher: NPTEL
Have you ever wondered how to analyse data and draw valuable conclusions from it? This course answers that question by showing you how to assess the plausibility of a hypothesis using different datasets. You will study statistical methods that use concepts and graphs to make decisions about population probability distributions. Investigate how a finding becomes statistically significant by enrolling in this course today.
  • Duration: 1.5-3 hours
  • Students: 84
  • Accreditation: CPD


Description

This course communicates the interplay between information theory and statistics by introducing the basic elements of statistical decision theory. It begins by describing the procedures for estimating parameters and testing hypotheses using various statistical methods. First, you will discover how to cast detection problems as binary and m-ary hypothesis tests. Next, you will study the practice of using data analysis to infer the properties of an underlying probability distribution, which includes making estimates and predictions based on the best available information. Following this, we teach several conventional approaches to hypothesis testing. Then, we demonstrate how to ascertain the statistical power of a hypothesis test using the Neyman-Pearson formulation. We also examine how to select the better of two competing statistical models by assessing their goodness of fit through the ratio of their likelihoods.
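
The paragraph above turns on the likelihood ratio test that underlies the Neyman-Pearson formulation. As a minimal sketch of the idea, not the course's own material, the following Python snippet decides between two hypothetical Gaussian hypotheses; the distributions, sample size, and threshold are illustrative assumptions:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Hypothetical setup: under H0 the data are N(0, 1); under H1 they are N(1, 1).
    x = rng.normal(loc=1.0, scale=1.0, size=50)  # pretend these are the observations

    # Log-likelihood of the whole sample under each hypothesis.
    ll_h0 = norm.logpdf(x, loc=0.0, scale=1.0).sum()
    ll_h1 = norm.logpdf(x, loc=1.0, scale=1.0).sum()

    # Decide H1 when the log likelihood ratio exceeds a threshold; the value 0.0
    # here is illustrative only.
    log_ratio = ll_h1 - ll_h0
    decision = "H1" if log_ratio > 0.0 else "H0"
    print(f"log likelihood ratio = {log_ratio:.2f} -> decide {decision}")

In the Neyman-Pearson setting, the threshold would instead be calibrated so that the false-alarm probability under H0 stays below a chosen significance level.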

Familiarise yourself with the procedure for quantifying the dissimilarity between two probability distributions. You will discover the significance of the Kullback-Leibler divergence in measuring how far a given arbitrary distribution lies from the true distribution. Have you ever wondered how much information is revealed by a single coin toss? You will discover how to build a heuristic model, through a Bayesian formulation, for quantifying the information disclosed by a coin toss. We illustrate the procedure for testing more than one hypothesis simultaneously, and you will understand how multiple hypothesis testing applies to the same or dependent datasets. We then demonstrate how to estimate the parameters of a probability distribution by maximising a likelihood function. Subsequently, you will study a measure of similarity between two labellings of the same data, which includes relating the average information lost in a noisy channel to the probability of categorisation error.
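
To make the coin-toss and maximum-likelihood ideas concrete, here is a small sketch in the same spirit; the coin's bias, the number of tosses, and the fair-coin reference are assumptions of mine, not the course's. It estimates a Bernoulli parameter by maximum likelihood and computes the Kullback-Leibler divergence of the estimate from a fair coin:

    import numpy as np

    rng = np.random.default_rng(1)
    tosses = rng.binomial(1, 0.7, size=100)  # hypothetical coin with true bias 0.7

    # The maximum-likelihood estimate of a Bernoulli parameter is the sample mean.
    p_hat = tosses.mean()

    def kl_bernoulli(p, q):
        # KL divergence D(p || q) between Bernoulli(p) and Bernoulli(q), in nats.
        # Assumes 0 < p < 1 and 0 < q < 1 so the logarithms stay finite.
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    print(f"MLE of the bias: {p_hat:.2f}")
    print(f"D(Bernoulli(p_hat) || fair coin) = {kl_bernoulli(p_hat, 0.5):.4f} nats")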

Finally, you will study how we use mutual information to measure the reduction in uncertainty about one random variable given knowledge of another. In addition, you will see how Fano’s inequality gives a lower bound on the mutual information between two random variables that take values in a finite set. Lastly, you will learn about a set of probability-theoretic tools for characterising the fundamental and asymptotic performance limits of binary and multiple hypothesis testing. The core components of this course are the statistical methods for testing hypotheses and estimating the parameters of a population distribution. “Understanding Information and Statistical Inferences” is an informative course highlighting quantities such as entropy, mutual information, total variation distance, and Kullback-Leibler divergence, and it explains how these quantities play a role in critical problems in communication, statistics, and computer science. So, why wait? Enrol and explore how information-theoretic methods predict performance in statistical decision theory.
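
As a rough illustration of these last two quantities, again under assumptions of my own rather than the course's, the sketch below computes the mutual information of a small hypothetical joint distribution and the weak form of Fano's lower bound on the error of any estimator of X from Y:

    import numpy as np

    # Hypothetical joint distribution of X (3 values) and Y (2 values).
    p_xy = np.array([[0.30, 0.05],
                     [0.05, 0.30],
                     [0.10, 0.20]])
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)

    # Mutual information I(X;Y) = sum over x, y of p(x,y) * log(p(x,y) / (p(x) p(y))).
    mi = (p_xy * np.log(p_xy / np.outer(p_x, p_y))).sum()

    # Since I(X;Y) = H(X) - H(X|Y), the conditional entropy follows from H(X).
    h_x = -(p_x * np.log(p_x)).sum()
    h_x_given_y = h_x - mi

    # Weak form of Fano's inequality (in nats): Pe >= (H(X|Y) - ln 2) / ln(|X| - 1).
    m = p_x.size
    pe_lower = max(0.0, (h_x_given_y - np.log(2.0)) / np.log(m - 1))

    print(f"I(X;Y)  = {mi:.4f} nats")
    print(f"H(X|Y)  = {h_x_given_y:.4f} nats")
    print(f"Fano lower bound on error probability: {pe_lower:.4f}")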

