Study notes for Discrete Probability Distribution
2013-07-08 17:07
The Basics of Probability
Probability measures the amount of uncertainty of an event: a fact whose occurrence is uncertain. The sample space is the set of all possible outcomes, denoted as Ω.
Some properties:
Sum rule: for mutually exclusive events, p(A ∪ B) = p(A) + p(B)
Union bound: for any events, p(A ∪ B) ≤ p(A) + p(B)
Conditional probability: p(A|B) = p(A, B) / p(B), defined for p(B) > 0. To emphasize that p(A) is unconditional, p(A) is called the "marginal probability", and p(A, B) is called the "joint probability". The identity
p(A, B) = p(B|A) p(A) is called the "multiplication rule" or "factorization rule".
Total probability theorem: p(B) = p(B|A)p(A) + p(B|~A)p(~A)
Bayes' Theorem: p(A|B) = p(B|A) p(A) / p(B)
Bayes' Theorem can be regarded as a rule that updates a prior probability p(A) into a posterior probability p(A|B), taking into account the occurrence of the evidence/event B.
Conditional independence: two events A and B, with p(A) > 0 and p(B) > 0, are conditionally independent given C if p(A, B|C) = p(A|C) p(B|C).
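As a worked example of how the total probability theorem and Bayes' Theorem combine, consider a diagnostic test (the numbers below are illustrative, not from the source):

```python
# Bayes' Theorem worked example: a diagnostic test.
# Assumed numbers (illustrative only): prior p(D) = 0.01,
# sensitivity p(+|D) = 0.9, false-positive rate p(+|~D) = 0.05.
p_d = 0.01                # prior probability p(A)
p_pos_given_d = 0.9       # p(B|A)
p_pos_given_not_d = 0.05  # p(B|~A)

# Total probability theorem: p(B) = p(B|A)p(A) + p(B|~A)p(~A)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' Theorem: posterior p(A|B) = p(B|A)p(A) / p(B)
posterior = p_pos_given_d * p_d / p_pos
print(round(posterior, 4))  # the prior 0.01 is updated by the evidence B
```

Even with a positive test, the posterior stays well below 1 because the prior is small; this is exactly the prior-to-posterior update described above.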
Probability mass function (p.m.f.) of a random variable X is the function p(x) = Pr[X = x].
Joint probability mass function of X and Y is the function p(x, y) = Pr[X = x, Y = y].
Cumulative distribution function (c.d.f.) of a random variable X is the function F(x) = Pr[X ≤ x].
The c.d.f. describes the probability of an interval (X ≤ x), whereas the p.m.f. describes the probability of a single value (X = x).
Expectation: the expectation of a random variable X is E[X] = Σ_x x·Pr[X = x].
linearity: E[aX + bY] = aE[X] + bE[Y]
if X and Y are independent: E[XY] = E[X]·E[Y]
Markov's inequality: let X be a nonnegative random variable with finite expectation E[X]; then for all t > 0, Pr[X ≥ t] ≤ E[X] / t.
Variance: the variance of a random variable X is Var[X] = E[(X − E[X])²] = E[X²] − (E[X])², where σ = √Var[X] is called the standard deviation of the random variable X.
Var[aX] = a²Var[X]
if X and Y are independent, Var[X+Y]=Var[X]+Var[Y]
Chebyshev's inequality: let X be a random variable with finite variance; then for all t > 0, Pr[|X − E[X]| ≥ t] ≤ Var[X] / t².
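The two inequalities above can be checked numerically on a small p.m.f.; the values and probabilities below are made up purely for illustration:

```python
# Check Markov's and Chebyshev's inequalities on a small discrete pmf.
# The support and probabilities are illustrative only.
values = [0, 1, 2, 10]
probs  = [0.5, 0.3, 0.15, 0.05]

ex  = sum(v * p for v, p in zip(values, probs))      # E[X]
ex2 = sum(v * v * p for v, p in zip(values, probs))  # E[X^2]
var = ex2 - ex ** 2                                  # Var[X] = E[X^2] - E[X]^2

t = 5.0
# Markov: Pr[X >= t] <= E[X] / t  (X is nonnegative here)
pr_tail = sum(p for v, p in zip(values, probs) if v >= t)
assert pr_tail <= ex / t

# Chebyshev: Pr[|X - E[X]| >= t] <= Var[X] / t^2
pr_dev = sum(p for v, p in zip(values, probs) if abs(v - ex) >= t)
assert pr_dev <= var / t ** 2
```

Both bounds hold, though they are loose: the actual tail probability (0.05) is well below either bound.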
Bernoulli Distribution
A (single) Bernoulli trial is an experiment whose outcome is random and can be either of two possible outcomes, "success" and "failure" (or "yes" and "no"). Examples of Bernoulli trials include flipping a coin, polling a political opinion, etc. The Bernoulli distribution is the discrete probability distribution of a single discrete random variable X, which takes value 1 with success probability Pr(X = 1) = p, and value 0 with failure probability Pr(X = 0) = q = 1 − p. More
formally, the Bernoulli distribution is summarized as follows:
notation: Bern(p), where 0<p<1 is the probability of success.
support: X={0, 1}
p.m.f: Pr[X=0]=q=1-p, Pr[X=1]=p
mean: E[X]=p
variance: Var[X]=p(1-p)
It is a special case of the Binomial distribution B(n, p): the Bernoulli distribution is B(1, p).
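A minimal simulation sketch of Bernoulli trials, checking the stated mean and variance (the sample size and seed below are arbitrary):

```python
import random

# Simulate Bernoulli trials and compare the sample mean/variance
# with the theoretical values E[X] = p and Var[X] = p(1-p).
random.seed(0)
p = 0.3
n = 100_000
samples = [1 if random.random() < p else 0 for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(mean, var)  # should be close to 0.3 and 0.21
```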
Binomial Distribution
The Binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent Bernoulli trials with success probability p, denoted as X ~ B(n, p). The Binomial distribution is often used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent,
and so the resulting distribution is a hypergeometric distribution, not a binomial one.
The Binomial distribution is summarized as follows:
notation: B(n, p), where n is the number of trials and p is the success probability in each trial
support: k = {0, 1, ..., n} the number of successes
p.m.f: Pr[X = k] = C(n, k) p^k (1 − p)^(n − k), where C(n, k) = n! / (k!(n − k)!)
mean: np
variance: np(1-p)
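The p.m.f. above can be computed directly; a small sketch that also checks the p.m.f. sums to 1 over the support and the mean equals np (n and p below are chosen arbitrarily):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Pr[X = k] = C(n, k) * p**k * (1-p)**(n-k) for X ~ B(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# The pmf sums to 1 over the support k = 0..n,
# and the mean E[X] = sum_k k*Pr[X=k] equals n*p.
n, p = 10, 0.4
total = sum(binomial_pmf(k, n, p) for k in range(n + 1))
mean = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
print(total, mean)  # ~1.0 and ~4.0
```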
If n is large enough, the skew of the distribution is not too great, and a reasonable approximation to B(n, p) is given by the normal distribution N(np, np(1 − p)). This is useful because a large n makes the p.m.f. of the Binomial distribution expensive to compute directly.
One rule to determine whether such an approximation is reasonable, i.e., whether n is large enough, is that both np and n(1 − p) must be greater than 5; if both are greater than 15, the approximation should be good.
A second rule is that for n > 5, the normal approximation is adequate if the skew is small, i.e., if |(1/√n)(√((1 − p)/p) − √(p/(1 − p)))| < 0.3.
Another commonly used rule holds that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values, that is, if the interval np ± 3√(np(1 − p)) lies within (0, n).
To improve the accuracy of the approximation, a continuity correction is usually applied, to take into account that the binomial random variable is discrete while the normal random variable is continuous. The basic idea is to treat
the discrete value k as the continuous interval from k − 0.5 to k + 0.5.
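The continuity correction can be sketched as follows; the example parameters (n = 100, p = 0.5, k = 45) are arbitrary but satisfy the rules of thumb above:

```python
from math import comb, erf, sqrt

def binomial_cdf(k, n, p):
    """Exact Pr[X <= k] for X ~ B(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    """Phi((x - mu)/sigma) via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Normal approximation with continuity correction:
# Pr[X <= k] ~ Phi((k + 0.5 - np) / sqrt(np(1-p)))
n, p, k = 100, 0.5, 45   # np = 50 and n(1-p) = 50, both well above 15
mu, sigma = n * p, sqrt(n * p * (1 - p))
exact = binomial_cdf(k, n, p)
approx = normal_cdf(k + 0.5, mu, sigma)
print(exact, approx)  # the two values agree closely
```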
In addition, the Poisson distribution can be used to approximate the Binomial distribution when n is very large. A rule of thumb states that the Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller
than or equal to 0.05, and an excellent approximation if n ≥ 100 and np ≤ 10: B(n, p) ≈ Pois(np).
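A quick numerical comparison of the Poisson approximation against the exact binomial p.m.f.; n = 100 and p = 0.02 are chosen to satisfy the rule of thumb above:

```python
from math import comb, exp, factorial

# Poisson approximation to the binomial for large n and small p:
# B(n, p) ~ Pois(lambda) with lambda = n*p.
n, p = 100, 0.02     # n >= 100 and np = 2 <= 10
lam = n * p

diffs = []
for k in range(5):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)   # exact binomial pmf
    pois = lam**k * exp(-lam) / factorial(k)       # Poisson pmf
    diffs.append(abs(binom - pois))
    print(k, round(binom, 4), round(pois, 4))
```

The two columns agree to about two decimal places for every k shown.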
Poisson Distribution
Poisson distribution: let X be a discrete random variable taking values in the set of nonnegative integers with probability Pr[X = k] = λ^k e^(−λ) / k!.
My understanding: the Poisson distribution describes the fact that the probability of drawing a specific integer from a set of integers is not uniform. For example, it is well known that if someone is asked to pick a random integer from 1-10, some integers occur with greater probability whereas others occur with lower probability. Although it seems that all possible integers have an equal chance of being picked, this is not true in practice. I think this may be due to the subjectivity of people, i.e., some prefer larger values (i.e., larger λ) while others tend to pick smaller ones (i.e., smaller λ). This point needs to be verified, as this impression comes entirely from intuition.
The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since
the last event.
The Poisson distribution is summarized as follows.
notation: Pois(λ), where λ > 0 is a real number indicating the expected number of events occurring in the given interval.
support: k = {0, 1, 2, 3, ...}
mean: E[X] = λ
variance: Var[X] = λ
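The p.m.f. and the stated mean/variance can be checked numerically by truncating the infinite support at a large point (λ and the truncation point below are arbitrary):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Pr[X = k] = lam**k * exp(-lam) / k! for X ~ Pois(lam)."""
    return lam**k * exp(-lam) / factorial(k)

# Mean and variance both equal lambda; check numerically by
# truncating the infinite sum at a large K (the tail is negligible).
lam, K = 3.0, 100
mean = sum(k * poisson_pmf(k, lam) for k in range(K))
ex2 = sum(k * k * poisson_pmf(k, lam) for k in range(K))
var = ex2 - mean ** 2
print(mean, var)  # both ~3.0
```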
Applications of Poisson distribution
Telecommunication: telephone calls arriving in a system
Management: customers arriving at a counter or call center
Civil engineering: cars arriving at a traffic light
Generating Poisson random variables
algorithm poisson_random_number (Knuth):
init: Let L = e^(−λ), k = 0, and p = 1.
do:
k = k + 1
Generate uniform random number u in [0, 1], and let p = p × u.
while p > L.
return k − 1.
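The algorithm above translates directly into Python (a sketch; note that for large λ the running product p underflows, so this method is suited to small λ):

```python
import random
from math import exp

def poisson_random_number(lam):
    """Knuth's algorithm: generate one Pois(lam) sample."""
    L = exp(-lam)   # init: L = e^(-lambda)
    k, p = 0, 1.0   # init: k = 0, p = 1
    while True:
        k += 1
        p *= random.random()  # u uniform in [0, 1]; p = p * u
        if p <= L:            # loop while p > L
            return k - 1

# The sample mean should approach lambda (sample size is arbitrary).
random.seed(0)
lam = 4.0
n = 50_000
mean = sum(poisson_random_number(lam) for _ in range(n)) / n
print(mean)  # ~4.0
```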
References
Paola Sebastiani, A Tutorial on Probability Theory.
Mehryar Mohri, Introduction to Machine Learning - Basic Probability Notations.