This blog records my study of the stochastic processes course at UCAS.
Course Link
Course: Stochastic Processes (Electronics and Communications), Class 2, Fall 2023-24: Homepage (ucas.ac.cn)
Random Process and its classification
Random Process Definition:
A random process, also known as a stochastic process, is a collection of random variables indexed by a set, often representing time or space. Mathematically, a random process $\{X(t), t \in T\}$ is a family of random variables defined on a probability space $(\Omega, F, P)$,
where:
- X(t): is a random variable at index t
- t: belongs to an indexing set T
- Ω: is the sample space
- F: is the σ-algebra of events
- P: is the probability measure
$$
A \times B \text{ (Cartesian product)} = \{(a, b) \mid a \in A,\ b \in B\}
$$
A random process can be expressed as a function of two variables:
$$
X(t, \omega): T \times \Omega \rightarrow \mathbb{R}
$$
For a fixed $\omega$, $X(\cdot, \omega)$ is an ordinary function of $t$ (a sample path); for a fixed $t$, $X(t, \cdot)$ is a random variable.
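As a minimal Python sketch of these two views (the process $X(t, \omega) = A(\omega)\cos t$, with a random amplitude $A$, is my own illustrative choice, not from the course), fixing $\omega$ yields a sample path while fixing $t$ yields a random variable:

```python
import math
import random

def sample_path(seed, ts):
    """Fixing omega (represented here by a seed) makes X(t, omega) an
    ordinary deterministic function of t: one sample path."""
    rng = random.Random(seed)
    a = rng.gauss(0.0, 1.0)              # the single random amplitude A(omega)
    return [a * math.cos(t) for t in ts]

ts = [0.1 * k for k in range(50)]
path1 = sample_path(seed=1, ts=ts)       # same omega -> same path every time
path2 = sample_path(seed=2, ts=ts)       # another omega -> another path

# Fixing t = 0 instead: X(0, .) = A(.) is a random variable across omegas
values_at_0 = [sample_path(seed=s, ts=[0.0])[0] for s in range(200)]
```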
Random Experiment:
A random experiment is a process that yields one outcome from a well-defined set of possible outcomes. Mathematically, it can be characterized as follows:
Sample Space ($\Omega$): This is the set of all possible outcomes of the experiment, where each outcome is denoted by a sample point ($\omega$). The sample space is known a priori, meaning before the experiment is conducted.
Reproducibility: The experiment can be repeated under identical conditions, yielding potentially different outcomes in each repetition. This property ensures the statistical reliability of the experiment.
Unpredictability: Although the sample space is known, the specific outcome of a single trial of the experiment cannot be determined in advance; it is realized as a random variable which takes values from the sample space.
By adhering to these principles, a random experiment allows for the systematic study of random phenomena and facilitates the application of probability theory to analyze and predict outcomes.
Event Domain (σ-algebra):
$$
\Sigma \subseteq \{A \mid A \subseteq \Omega\} = 2^{\Omega}
$$
The σ-algebra must satisfy the following conditions:
Non-emptiness: The σ-algebra contains at least the empty set and the sample space itself, that is, $$ \emptyset \in \Sigma \quad \text{and} \quad \Omega \in \Sigma $$
Closure under complement: If an event $A$ is in the σ-algebra, then its complement $A^c$ (or $\overline{A}$) is also in the σ-algebra, mathematically represented as: $$ A \in \Sigma \implies A^c \in \Sigma $$
Closure under countable unions (and intersections): If $A_1, A_2, A_3, \ldots$ are events in the σ-algebra, then their countable union (and, by extension, their countable intersection) is also in the σ-algebra, expressed as:
$$ A_i \in \Sigma , \text{ for all } i \implies \bigcup_{i=1}^{\infty} A_i \in \Sigma , \text{ and } \bigcap_{i=1}^{\infty} A_i \in \Sigma $$
By satisfying these conditions, the σ-algebra $\Sigma$ forms the foundational structure upon which probability measures are defined, assigning probabilities to events in a manner consistent with the axioms of probability theory.
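For a finite sample space the three axioms can be checked mechanically (a minimal sketch of my own; for finite $\Omega$, countable unions reduce to finite unions):

```python
def is_sigma_algebra(omega, sigma):
    """Check the sigma-algebra axioms on a finite collection of subsets."""
    omega = frozenset(omega)
    sigma = {frozenset(a) for a in sigma}
    if frozenset() not in sigma or omega not in sigma:
        return False                      # non-emptiness fails
    for a in sigma:
        if omega - a not in sigma:
            return False                  # closure under complement fails
    for a in sigma:
        for b in sigma:
            if a | b not in sigma:
                return False              # closure under union fails
    return True

omega = {1, 2, 3, 4}
trivial = [set(), omega]                        # the smallest sigma-algebra
generated = [set(), {1, 2}, {3, 4}, omega]      # generated by the event {1, 2}
broken = [set(), {1}, omega]                    # missing the complement {2, 3, 4}
```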
Statistical Moments
Expectation (期望)
The expectation or expected value of a random variable $X$, denoted $E[X]$, is defined as:
Discrete Case:
$$
E[X] = \sum_i x_i P(X = x_i)
$$
Continuous Case:
$$
E[X] = \int_{-\infty}^{+\infty} x f(x)\, dx
$$
Deviation (偏差)
The deviation of a random variable $X$ from its expected value $E[X]$ is the quantity $X - E[X]$ (itself a random variable). For a particular value it is:
Discrete Case:
$$
d_i = x_i - E[X]
$$
Continuous Case:
$$
d(x) = x - E[X]
$$
Covariance (协方差)
The covariance between two random variables $X$ and $Y$, denoted $\mathrm{Cov}(X, Y)$, is defined as:
Discrete Case:
$$
\mathrm{Cov}(X, Y) = \sum_i \sum_j (x_i - E[X])(y_j - E[Y]) P(X = x_i, Y = y_j)
$$
Continuous Case:
$$
\mathrm{Cov}(X, Y) = \int\!\!\int (x - E[X])(y - E[Y]) f(x, y)\, dx\, dy
$$
Pearson Correlation Coefficient (皮尔逊相关系数)
The Pearson correlation coefficient, denoted $\rho$ for a population and $r$ for a sample, measures the linear relationship between two random variables $X$ and $Y$. It is defined as the ratio of the covariance of $X$ and $Y$ to the product of their standard deviations $\sigma_X$ and $\sigma_Y$. The formula is given by:
$$
\rho_{XY} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}
$$
or in terms of expectations:
$$
\rho_{XY} = \frac{E[(X - E[X])(Y - E[Y])]}{\sigma_X \sigma_Y}
$$
Discrete Case:
$$
\rho_{XY} = \frac{\sum_i \sum_j (x_i - E[X])(y_j - E[Y]) P(X = x_i, Y = y_j)}{\sigma_X \sigma_Y}
$$
Continuous Case:
$$
\rho_{XY} = \frac{\int\!\!\int (x - E[X])(y - E[Y]) f(x, y)\, dx\, dy}{\sigma_X \sigma_Y}
$$
The Pearson correlation coefficient ranges between -1 and +1, where +1 indicates a perfect positive linear relationship, -1 indicates a perfect negative linear relationship, and 0 indicates no linear relationship.
These statistical moments are fundamental in understanding the properties and behaviors of random variables and are pivotal in the field of statistics and probability theory.
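The definitions above can be checked numerically. A small Python sketch using a made-up joint pmf (the numbers are illustrative, not from the course):

```python
import math

# Joint pmf P(X = x, Y = y) for a made-up two-point example
pmf = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

EX = sum(x * p for (x, y), p in pmf.items())                     # E[X]
EY = sum(y * p for (x, y), p in pmf.items())                     # E[Y]
cov = sum((x - EX) * (y - EY) * p for (x, y), p in pmf.items())  # Cov(X, Y)
var_x = sum((x - EX) ** 2 * p for (x, y), p in pmf.items())
var_y = sum((y - EY) ** 2 * p for (x, y), p in pmf.items())
rho = cov / math.sqrt(var_x * var_y)                             # Pearson coefficient
```

The same sums mirror the discrete-case formulas term by term: each moment is a probability-weighted sum over the support of the joint pmf.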
Kolmogorov's Existence Theorem
Kolmogorov's Existence Theorem is a central theorem of probability theory: it provides a way to construct a joint probability distribution from a given family of one-dimensional marginal and conditional distributions. It was established by Andrey Nikolaevich Kolmogorov in the 1930s.
The theorem can be stated as follows:
Given a sequence of random variables $X_1, X_2, X_3, \ldots, X_n$ together with their marginal and conditional distributions, if these distributions satisfy certain consistency conditions (Kolmogorov's consistency conditions), then there exists a probability space and a family of random variables on it having exactly the given marginal and conditional distributions.
The proof is based on the Kolmogorov extension theorem and uses basic concepts and tools from measure theory.
$$
X \text{ and } Y \text{ independent} \iff X \text{ and } Y \text{ uncorrelated}
$$
(The condition is that $(X, Y)$ is jointly normal, not merely that $X$ and $Y$ are each normal.)
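A quick Monte Carlo illustration of why joint normality matters (a standard counterexample of my own choosing, not from the course): take $X \sim N(0,1)$ and $Y = SX$ with an independent random sign $S = \pm 1$. Both marginals are normal and $\mathrm{Cov}(X, Y) = 0$, yet $|Y| = |X|$ always, so $X$ and $Y$ are dependent; the pair $(X, Y)$ is not jointly normal.

```python
import random

rng = random.Random(0)
n = 200_000
xs, ys = [], []
for _ in range(n):
    x = rng.gauss(0.0, 1.0)
    s = rng.choice([-1.0, 1.0])   # random sign, independent of x
    xs.append(x)
    ys.append(s * x)              # Y = S * X: still N(0, 1) marginally

mean = lambda v: sum(v) / len(v)
sample_cov = mean([a * b for a, b in zip(xs, ys)]) - mean(xs) * mean(ys)
# sample_cov is near 0 (uncorrelated), yet |Y| always equals |X| (dependent)
```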
Formula
$$ f(x) = \sum_{k} p_k\delta(x-x_k) $$
Description
This formula represents a discrete point distribution, also known as a Dirac comb.
$\delta(x-x_k)$ is the Dirac delta function. In mathematics and physics it is defined as a function that is zero everywhere except at the point $x = x_k$ and whose integral over the whole real line equals 1. Informally, it is a "spike" at $x = x_k$.
$p_k$ is the weight or intensity associated with the position $x_k$; it sets the amplitude of the delta function located there.
$\sum_{k}$ denotes summation over all values of $k$.
Overall, $f(x)$ is a superposition of Dirac delta functions placed at the various positions $x_k$, each scaled by its weight $p_k$. This representation is often used to describe a distribution of discrete events or particles, where each event produces an "impulse" at a specific position $x_k$ whose strength is given by $p_k$.
Formula:
$$
f(x) = \sum_{k} p_k\delta(x-x_k)
$$
Description:
The same formula can also be read as the probability distribution of a discrete random variable.
$\delta(x-x_k)$ is the Dirac delta function: a "spike" at $x = x_k$ that is zero everywhere else.
$p_k$ is the probability that the random variable takes the value $x_k$: when the variable equals $x_k$, the associated probability mass is $p_k$.
$\sum_{k}$ denotes summation over all $k$, i.e. over every value the random variable can take.
Overall, $f(x)$ describes the distribution of a discrete random variable in density form: for each possible value $x_k$, the delta function concentrates the mass at $x = x_k$, and the size of that mass is the probability $p_k$. This gives a mathematical way to write a discrete distribution as a density, with each possible value carrying its associated probability.
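In code, such a distribution is just a list of $(x_k, p_k)$ pairs: integrating $x f(x)$ against the delta spikes collapses to a weighted sum, and the CDF becomes a step function. A minimal sketch with made-up numbers:

```python
# A discrete law stored as (x_k, p_k) pairs -- the "delta spike" representation
support = [(-1.0, 0.2), (0.0, 0.5), (2.0, 0.3)]

total_mass = sum(p for _, p in support)     # must equal 1
EX = sum(x * p for x, p in support)         # integral of x*f(x) -> sum of p_k * x_k

def cdf(x):
    """F(x) = P(X <= x): each delta at x_k contributes a jump of height p_k."""
    return sum(p for xk, p in support if xk <= x)
```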
Conditional Expectation
The law of total expectation is an important formula in probability theory and statistics; it is used to compute the expectation of a compound random variable. Its derivation is as follows:
Suppose we have a discrete random variable $X$ and a random variable $Y$ related to $X$. We want to find the expectation $E[Y]$.
First, we find the conditional expectation of $Y$ given $X = x$, written $E[Y \mid X = x]$.
Then the law of total expectation gives $E[Y]$; the formula is:
$$
E[Y] = E[E[Y \mid X]]
$$
Expanding this further:
$$
E[Y] = \sum_x E[Y \mid X = x] P(X = x)
$$
For continuous random variables, the formula becomes an integral:
$$
E[Y] = \int E[Y \mid X = x] f_X(x)\, dx
$$
where $f_X(x)$ is the probability density function of $X$.
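The tower property $E[Y] = E[E[Y \mid X]]$ can be verified numerically on a small discrete joint pmf (the numbers below are a hypothetical example):

```python
# Joint pmf P(X = x, Y = y); the numbers are illustrative
pmf = {(0, 10): 0.2, (0, 20): 0.2, (1, 10): 0.1, (1, 20): 0.5}

EY_direct = sum(y * p for (x, y), p in pmf.items())   # E[Y] computed directly

# Marginal P(X = x)
px = {}
for (x, y), p in pmf.items():
    px[x] = px.get(x, 0.0) + p

def cond_EY(x0):
    """E[Y | X = x0] = sum over y of y * P(X = x0, Y = y) / P(X = x0)."""
    return sum(y * p for (x, y), p in pmf.items() if x == x0) / px[x0]

# Tower property: average the conditional means over the law of X
EY_tower = sum(cond_EY(x0) * p for x0, p in px.items())
```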
Total Probability Formula for the Distribution Function
$$
P\{X \leq x\} = \int_{-\infty}^{+\infty} P\{X \leq x \mid Y = y\}\, f_{Y}(y)\, dy
$$
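A Monte Carlo sanity check of this formula under assumed distributions (my own choice, not from the course): let $Y \sim N(0,1)$ and $X \mid Y = y \sim N(y, 1)$, so that unconditionally $X \sim N(0, 2)$ and the integral evaluates to $\Phi(x/\sqrt{2})$:

```python
import math
import random

rng = random.Random(42)
n = 200_000
x0 = 0.5

# Integrate over the law of Y by sampling: draw Y, then X given Y = y
hits = 0
for _ in range(n):
    y = rng.gauss(0.0, 1.0)   # Y ~ N(0, 1)
    x = rng.gauss(y, 1.0)     # X | Y = y ~ N(y, 1)
    if x <= x0:
        hits += 1
mc_estimate = hits / n

# Closed form: X ~ N(0, 2), so P(X <= x0) = Phi(x0 / sqrt(2)), Phi via erf
z = x0 / math.sqrt(2.0)
exact = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```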
Homework: continue with problems 5, 6, 7, and 8.