Statistics

Bivariate Discrete Cumulative Distribution Function

Bivariate Discrete Cumulative Distribution Function. Data Science and A.I. Lecture Series. Author: Bindeshwar Singh Kushwaha, Institute: PostNetwork Academy. Joint and Marginal Distribution Functions for Discrete Random Variables. Two-Dimensional Joint Distribution Function: the distribution function of the two-dimensional random variable \((X, Y)\), defined for all real \(x\) and \(y\), is \[ F(x, y) = P(X \leq x, \; Y \leq y). \]
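
A minimal sketch of this definition (not part of the original post): for a discrete pair, F(x, y) can be computed by summing a joint probability table over all support points below (x, y); the PMF values below are hypothetical and chosen only for illustration.

# Sketch: evaluate F(x, y) = P(X <= x, Y <= y) from an assumed discrete joint PMF.
joint_pmf = {
    (0, 0): 0.10, (0, 1): 0.20,   # hypothetical probabilities, not from the lecture
    (1, 0): 0.30, (1, 1): 0.40,
}

def joint_cdf(x, y):
    # Sum the mass over all support points (a, b) with a <= x and b <= y.
    return sum(p for (a, b), p in joint_pmf.items() if a <= x and b <= y)

print(round(joint_cdf(0, 1), 2))  # P(X <= 0, Y <= 1) = 0.1 + 0.2 = 0.3
print(round(joint_cdf(1, 1), 2))  # all of the probability mass: 1.0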


Bivariate Discrete Random Variables Data Science and A.I. Lecture Series

Bivariate Discrete Random Variables. Data Science and A.I. Lecture Series. By Bindeshwar Singh Kushwaha, PostNetwork Academy. Definition: let \( X \) and \( Y \) be two discrete random variables defined on the sample space \( S \) of a random experiment. Then the pair \( (X, Y) \), defined on the same sample space, is called a two-dimensional (bivariate) discrete random variable.
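
A small sketch of this definition (the experiment is assumed, not taken from the lecture): the pair (X, Y) is literally a function on the sample space S, so its joint PMF can be tabulated by evaluating X and Y on every outcome. Here S is two tosses of a fair coin, X is the indicator of heads on the first toss, and Y is the total number of heads.

# Sketch: build (X, Y) as functions on a sample space and tabulate the joint PMF.
from collections import Counter
from itertools import product

S = list(product("HT", repeat=2))        # sample space of two coin tosses
X = lambda w: 1 if w[0] == "H" else 0    # X: heads on the first toss
Y = lambda w: w.count("H")               # Y: total number of heads

counts = Counter((X(w), Y(w)) for w in S)             # each outcome has probability 1/4
joint_pmf = {xy: n / len(S) for xy, n in counts.items()}
print(joint_pmf)   # {(1, 2): 0.25, (1, 1): 0.25, (0, 1): 0.25, (0, 0): 0.25}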


Continuous Cumulative Distribution Function (CDF) | Probability & Statistics

Definition: Continuous CDF. A continuous random variable can take infinitely many values in a given range. The Probability Density Function (PDF) \( f(x) \) describes the likelihood of \( X \) falling within a small interval. The Cumulative Distribution Function (CDF) is given by: \[ F(x) = P[X \leq x] = \int_{-\infty}^{x} f(t)\, dt. \]
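
As a hedged illustration (the density is assumed, not taken from the post), the CDF can be approximated by numerically integrating a PDF; here f(t) = e^{-t} for t >= 0, the exponential density with rate 1.

# Sketch: approximate F(x) = P[X <= x] by integrating an assumed PDF numerically.
import math

def f(t):
    # Exponential(1) density, used purely as an example PDF.
    return math.exp(-t) if t >= 0 else 0.0

def cdf(x, n=100_000):
    # Riemann-sum approximation of the integral of f from 0 to x (no mass below 0 here).
    if x <= 0:
        return 0.0
    h = x / n
    return sum(f(i * h) for i in range(n)) * h

print(round(cdf(1.0), 4))           # about 0.6321
print(round(1 - math.exp(-1), 4))   # exact value 1 - e^{-1}, for comparison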


Central Limit Theorem (CLT) and Uniformly Minimum Variance Unbiased Estimator (UMVUE)

Central Limit Theorem (CLT) and Uniformly Minimum Variance Unbiased Estimator (UMVUE). By Bindeshwar Singh Kushwaha, PostNetwork Academy. Question 1: suppose \( X_1, X_2, \dots \) is an i.i.d. sequence of random variables with common variance \( \sigma^2 > 0 \). Define: \[ Y_n = \frac{1}{n} \sum_{i=1}^{n} X_{2i-1}, \quad Z_n = \frac{1}{n} \sum_{i=1}^{n} X_{2i} \]
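
A simulation sketch of this setup (the distribution of the X_i is assumed to be standard normal, which the question does not specify): Y_n averages the odd-indexed terms and Z_n the even-indexed terms, so each is the mean of n i.i.d. observations.

# Sketch: simulate Y_n and Z_n from an i.i.d. sequence X_1, ..., X_{2n}.
import random

def simulate(n):
    x = [random.gauss(0, 1) for _ in range(2 * n)]   # assumed distribution of the X_i
    y_n = sum(x[0::2]) / n    # mean of X_1, X_3, ..., X_{2n-1}
    z_n = sum(x[1::2]) / n    # mean of X_2, X_4, ..., X_{2n}
    return y_n, z_n

diffs = [y - z for y, z in (simulate(1000) for _ in range(500))]
print(sum(diffs) / len(diffs))   # near 0: Y_n and Z_n are independent means of n terms each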


Continuous Random Variable and Probability Density Function

Continuous Random Variable and Probability Density Function. Data Science and A.I. Lecture Series. A random variable is continuous if it can take any real value within a given range. Instead of a probability mass function, we use a probability density function (PDF), denoted by \( f(x) \). The probability that \( X \) falls in an interval \([a, b]\) is obtained by integrating the PDF: \( P(a \leq X \leq b) = \int_{a}^{b} f(x)\, dx \).
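
A minimal sketch of how probabilities are read off a PDF (the density below is hypothetical, chosen only for illustration): probability is area under f, so P(a <= X <= b) is the integral of f over [a, b].

# Sketch: P(a <= X <= b) as the area under an assumed PDF f(x) = 2x on [0, 1].
def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def prob(a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(round(prob(0.25, 0.75), 4))   # 0.5, since 0.75**2 - 0.25**2 = 0.5
print(round(prob(0.0, 1.0), 4))     # total probability is 1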


Some Questions Based on Discrete Probability Distributions

Some Questions Based on Discrete Probability Distributions. Data Science and A.I. Lecture Series. Problem 1: 2 bad articles are mixed with 5 good ones. Find the probability distribution of the number of bad articles if 2 articles are drawn at random. Let \( X \) be the number of bad articles drawn. Possible values: \( X = 0, 1, 2 \).
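
A short sketch of the counting behind this problem (a sanity check, not the original worked solution): the draws are without replacement, so each probability is a ratio of binomial coefficients.

# Sketch: distribution of X, the number of bad articles among 2 drawn from 2 bad and 5 good.
from math import comb

bad, good, drawn = 2, 5, 2
total = comb(bad + good, drawn)              # C(7, 2) = 21 equally likely samples

for x in range(min(bad, drawn) + 1):
    p = comb(bad, x) * comb(good, drawn - x) / total
    print(x, p)   # P(X=0) = 10/21, P(X=1) = 10/21, P(X=2) = 1/21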


Discrete Random Variable and Probability Mass Function

Discrete Random Variable and Probability Mass Function. Data Science and A.I. Lecture Series. A random variable is said to be discrete if it has either a finite or a countable number of values. Countable values are those which can be arranged in a sequence corresponding to the natural numbers. Example: the number of students present each day.
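
A tiny sketch of a probability mass function for an assumed discrete random variable (the face shown by a fair six-sided die, not an example from the post): the PMF assigns a probability to each value, and the masses must sum to one.

# Sketch: PMF of a fair die as a dictionary from value to probability mass.
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}

print(pmf[3])                               # P(X = 3) = 1/6
print(sum(pmf.values()))                    # the masses sum to 1
print(sum(x * p for x, p in pmf.items()))   # expected value, 7/2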


Random Variables and Probability Distributions

Random Variables and Probability Distributions. Introduction to Random Variables: in many experiments, we are interested in a numerical characteristic associated with the outcomes of a random experiment. A random variable (RV) is a function that assigns a numerical value to each outcome of a random experiment. Example: consider tossing a fair die twice and defining a random variable \( X \) on the resulting outcomes.
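
To make the "function on outcomes" idea concrete, here is a hedged sketch of the die example; since the excerpt is cut off, the choice X = sum of the two faces is an assumption made only for illustration.

# Sketch: a random variable as a function that assigns a number to each outcome.
from collections import Counter
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes of two tosses
X = lambda w: w[0] + w[1]                         # assumed RV: sum of the two faces

counts = Counter(X(w) for w in outcomes)
pmf = {x: n / len(outcomes) for x, n in sorted(counts.items())}
print(pmf[7])   # P(X = 7) = 6/36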


Bayes’ Theorem and Examples | Data Science & AI

Bayes’ Theorem and Examples. Formula: the formula for Bayes’ Theorem is given by: $$ P(E_i | A) = \frac{P(E_i) P(A | E_i)}{\sum_{j=1}^{n} P(E_j) P(A | E_j)} $$ Key terminology: \(E_i\) are the hypotheses or possible causes; \(P(E_i)\) is the prior probability of \(E_i\); \(P(E_i | A)\) is the posterior probability of \(E_i\). The denominator ensures that the posterior probabilities sum to one; it equals the total probability \( P(A) \).
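
A minimal sketch of the formula (the priors and likelihoods below are made-up numbers used only to illustrate the computation): multiply each prior by its likelihood, then normalise by the denominator, which is the total probability P(A).

# Sketch: Bayes' theorem with hypothetical priors P(E_i) and likelihoods P(A | E_i).
priors      = [0.5, 0.3, 0.2]    # P(E_1), P(E_2), P(E_3)
likelihoods = [0.1, 0.4, 0.7]    # P(A | E_1), P(A | E_2), P(A | E_3)

evidence = sum(p * l for p, l in zip(priors, likelihoods))            # denominator, P(A) = 0.31
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]

print(posteriors)        # posterior probabilities P(E_i | A)
print(sum(posteriors))   # sums to 1, which is what the denominator ensures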


Addition and Multiplicative Laws Probability Explained

Problems Using Both Addition and Multiplicative Laws. Data Science and A.I. Lecture Series, PostNetwork Academy. Probability Laws: the addition law of probability states: \[ P(A \cup B) = P(A) + P(B) - P(A \cap B) \] The multiplicative law of probability for independent events states: \[ P(A \cap B) = P(A) \cdot P(B) \]
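
A quick sketch applying both laws to assumed values (P(A) = 0.5 and P(B) = 0.25, with A and B independent; these numbers are illustrative, not from the post).

# Sketch: multiplication law for independent events, then the addition law.
p_a, p_b = 0.5, 0.25

p_a_and_b = p_a * p_b                 # P(A ∩ B) = P(A) · P(B) for independent events
p_a_or_b = p_a + p_b - p_a_and_b      # P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

print(p_a_and_b)   # 0.125
print(p_a_or_b)    # 0.625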

