Answer:
Consider a sequence of random variables X1, X2, X3, ⋯ that is defined on an underlying sample space S. For simplicity, let us assume that S is a finite set, so we can write
S = {s₁, s₂, ⋯, sₖ}.
Remember that each Xₙ is a function from S to the set of real numbers. Thus, we may write
Xₙ(sₐ) = xₙₐ, for a = 1, 2, ⋯, k.
After the random experiment is performed, one of the sₐ's is the outcome, and the values of all the Xₙ's are determined. If sₐ is the outcome of the experiment, we observe the following sequence:
x₁ₐ ,x₂ₐ,x₃ₐ,⋯.
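As a small illustration of this setup (the sample space, the assignment Xₙ(sₐ) = n·a, and the outcome names are all made up for the example), each Xₙ is just a function on S, and picking one outcome fixes the entire observed sequence:

```python
import random

# Hypothetical finite sample space with k = 3 outcomes.
S = ["s1", "s2", "s3"]

# Each X_n is a function from S to the reals; the rule X_n(s_a) = n * a
# below is an arbitrary choice purely for illustration.
def X(n, s):
    a = S.index(s) + 1
    return n * a

# Performing the experiment selects one outcome s_a; the whole
# sequence x_{1a}, x_{2a}, x_{3a}, ... is then determined.
outcome = random.choice(S)
observed = [X(n, outcome) for n in range(1, 6)]
```

Whatever outcome is drawn, the observed values grow linearly in n here, since every term is determined by the single realized sₐ.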
- Mean Square Convergence:
Let {Xₙ} be a sequence of square-integrable random variables defined on a sample space Ω. We say that {Xₙ} is mean-square convergent (or convergent in mean square) if and only if there exists a square-integrable random variable X such that
[tex]\lim_{n \to \infty} E\left[ (X_{n} - X)^{2} \right] = 0.[/tex]
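A quick simulation sketch of this definition (the construction Xₙ = X + Zₙ/√n is my own example, chosen so that E[(Xₙ − X)²] = 1/n):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100_000)  # samples of the limit variable X

def mean_square_error(n):
    # X_n = X + Z / sqrt(n) with Z ~ N(0, 1) independent of X,
    # so E[(X_n - X)^2] = 1/n, which tends to 0 as n grows.
    Xn = X + rng.normal(size=X.size) / np.sqrt(n)
    return np.mean((Xn - X) ** 2)

errors = [mean_square_error(n) for n in (1, 10, 100, 1000)]
```

The Monte Carlo estimates of E[(Xₙ − X)²] shrink roughly like 1/n, matching the limit in the definition.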
- Distribution Convergence:
Convergence in distribution is in some sense the weakest type of convergence. All it says is that the CDF of Xₙ's converges to the CDF of X as n goes to infinity. It does not require any dependence between the Xₙ's and X. We saw this type of convergence before when we discussed the central limit theorem. To say that Xₙ converges in distribution to X, we write:
Xₙ →d X.
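The central limit theorem mentioned above gives a concrete check: the CDF of a standardized sample mean should approach the N(0, 1) CDF. The following sketch (sample sizes and test points are my own choices) compares the empirical CDF against Φ at a few points:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def standardized_mean(n, reps=50_000):
    # Standardized mean of n Uniform(0, 1) draws; by the CLT this
    # converges in distribution to N(0, 1) as n grows.
    U = rng.random((reps, n))
    return (U.mean(axis=1) - 0.5) / (np.sqrt(1 / 12) / np.sqrt(n))

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

Z = standardized_mean(30)
# Gap between the empirical CDF of X_n and the limiting CDF of X.
gaps = [abs(np.mean(Z <= x) - Phi(x)) for x in (-1.0, 0.0, 1.0)]
```

Even at n = 30 the empirical CDF of Xₙ sits very close to the normal CDF, which is exactly what Xₙ →d X asserts.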
- Probability Convergence:
Convergence in probability is stronger than convergence in distribution. In particular, for a sequence X₁, X₂, X₃, ⋯ to converge in probability to a random variable X, we must have that P(|Xₙ − X| ≥ ϵ) goes to 0 as n → ∞, for every ϵ > 0. To say that Xₙ converges in probability to X, we write:
Xₙ →p X.
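The weak law of large numbers is the standard example of this mode of convergence: the sample mean of Bernoulli(p) trials converges in probability to p. A simulation sketch (p, ϵ, and the sample sizes are my own choices) estimates P(|Xₙ − p| ≥ ϵ) directly:

```python
import numpy as np

rng = np.random.default_rng(2)
p, eps = 0.5, 0.1

def prob_far(n, reps=20_000):
    # Estimate P(|X_n - p| >= eps), where X_n is the mean of
    # n Bernoulli(p) trials, by repeating the experiment reps times.
    means = rng.binomial(n, p, size=reps) / n
    return np.mean(np.abs(means - p) >= eps)

probs = [prob_far(n) for n in (10, 100, 1000)]
```

The estimated probabilities drop toward 0 as n grows, which is the defining property of Xₙ →p p.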