The probability distribution law of a two-dimensional random variable

An ordered pair (X, Y) of random variables X and Y is called a two-dimensional random variable, or a random vector in two-dimensional space. The two-dimensional random variable (X, Y) is also called a system of the random variables X and Y. The set of all possible values of a discrete random variable together with their probabilities is called the distribution law of that random variable. A discrete two-dimensional random variable (X, Y) is considered specified if its distribution law is known:

P(X = x_i, Y = y_j) = p_ij,  i = 1, 2, …, n;  j = 1, 2, …, m

Purpose of the service. For a distribution law specified in this form, the service finds:

  • the distribution series of X and Y, the expectations M[X], M[Y] and the variances D[X], D[Y];
  • the covariance cov(X, Y), the correlation coefficient r_xy, the conditional distribution series of X and the conditional expectation M[X|Y];
In addition, it answers the question "Are the random variables X and Y dependent?".

Instructions. Specify the dimensions of the probability distribution matrix (the number of rows and columns) and its entries. The resulting solution is saved in a Word file.

Example 1. A two-dimensional discrete random variable has the distribution table:

X \ Y   1      2      3      4
10      0      0.11   0.12   0.03
20      0      0.13   0.09   0.02
30      0.02   0.11   0.08   0.01
40      0.03   0.11   0.05   q
Find the quantity q and the correlation coefficient of this random variable.

Solution. The value q is found from the condition Σp_ij = 1:
Σp_ij = 0.02 + 0.03 + 0.11 + … + 0.03 + 0.02 + 0.01 + q = 1
0.91 + q = 1, whence q = 0.09.

Using the formula p_i = Σ_j p(x_i, y_j), we find the distribution series of X: P(X = 10) = 0.26, P(X = 20) = 0.24, P(X = 30) = 0.22, P(X = 40) = 0.28; the column sums give the series of Y: P(Y = 1) = 0.05, P(Y = 2) = 0.46, P(Y = 3) = 0.34, P(Y = 4) = 0.15.

Expectation M[X] = 10·0.26 + 20·0.24 + 30·0.22 + 40·0.28 = 25.2
Variance D[X] = 10²·0.26 + 20²·0.24 + 30²·0.22 + 40²·0.28 − 25.2² = 132.96, so σ(X) = √132.96 = 11.531
Expectation M[Y] = 1·0.05 + 2·0.46 + 3·0.34 + 4·0.15 = 2.59
Variance D[Y] = 1²·0.05 + 2²·0.46 + 3²·0.34 + 4²·0.15 − 2.59² = 0.64
Standard deviation σ(Y) = √D[Y] = √0.64 = 0.801

Covariance: cov(X, Y) = M[XY] − M[X]·M[Y] = 2·10·0.11 + 3·10·0.12 + 4·10·0.03 + 2·20·0.13 + 3·20·0.09 + 4·20·0.02 + 1·30·0.02 + 2·30·0.11 + 3·30·0.08 + 4·30·0.01 + 1·40·0.03 + 2·40·0.11 + 3·40·0.05 + 4·40·0.09 − 25.2·2.59 = −0.068
Correlation coefficient r_xy = cov(X, Y)/(σ(X)·σ(Y)) = −0.068/(11.531·0.801) = −0.00736
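For readers who want to verify these numbers programmatically, here is a minimal sketch in Python/NumPy (an illustration, not part of the original service; the array layout follows the table above, rows X = 10…40, columns Y = 1…4, with q = 0.09 already substituted):

```python
import numpy as np

# Joint probability table of Example 1: rows X = 10, 20, 30, 40; columns Y = 1, 2, 3, 4.
P = np.array([
    [0.00, 0.11, 0.12, 0.03],
    [0.00, 0.13, 0.09, 0.02],
    [0.02, 0.11, 0.08, 0.01],
    [0.03, 0.11, 0.05, 0.09],
])
x = np.array([10, 20, 30, 40])
y = np.array([1, 2, 3, 4])

assert abs(P.sum() - 1.0) < 1e-9          # the probabilities form a complete group

px = P.sum(axis=1)                        # distribution series of X
py = P.sum(axis=0)                        # distribution series of Y

mx, my = x @ px, y @ py                   # M[X], M[Y]
dx = (x**2) @ px - mx**2                  # D[X]
dy = (y**2) @ py - my**2                  # D[Y]

cov = x @ P @ y - mx * my                 # cov(X, Y) = M[XY] - M[X]M[Y]
r = cov / np.sqrt(dx * dy)                # correlation coefficient

print(mx, my, dx, dy)                     # 25.2  2.59  132.96  ~0.64
print(cov, r)                             # -0.068  ~ -0.0074
```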

Example 2. The results of statistical processing of data on two indicators X and Y are presented in the correlation table below. It is required to:

  1. write down the distribution series for X and Y and compute their sample means and sample standard deviations;
  2. write down the conditional distribution series of Y given X and compute the conditional means of Y given X;
  3. plot the dependence of the conditional means of Y given X on the values of X;
  4. compute the sample correlation coefficient of Y and X;
  5. write down the sample equations of the regression lines;
  6. plot the data of the correlation table and draw the regression lines.
Solution. Recall that an ordered pair (X, Y) of random variables X and Y is called a two-dimensional random variable (a system of two random variables), and that a discrete two-dimensional random variable (X, Y) is specified by its distribution law:
P(X = x_i, Y = y_j) = p_ij,  i = 1, 2, …, n;  j = 1, 2, …, m
X \ Y   20   30   40   50   60
11      2    0    0    0    0
16      4    6    0    0    0
21      0    3    6    2    0
26      0    0    45   8    4
31      0    0    4    6    7
36      0    0    0    0    3
The events (X = x_i, Y = y_j) form a complete group of events, so the relative frequencies p_ij (i = 1, 2, …, n; j = 1, 2, …, m) obtained from the table sum to 1 (the table entries are frequencies out of n = 100 observations).
1. Dependence of the random variables X and Y.
We find the distribution series of X and Y.
Using the formula p_i = Σ_j p(x_i, y_j), the row totals give the series of X (frequencies 2, 10, 11, 57, 17, 3) and the column totals give the series of Y (frequencies 6, 9, 55, 16, 14).
Expectation M[Y]:
M[Y] = (20·6 + 30·9 + 40·55 + 50·16 + 60·14)/100 = 42.3
Variance D[Y]:
D[Y] = (20²·6 + 30²·9 + 40²·55 + 50²·16 + 60²·14)/100 − 42.3² = 99.71
Standard deviation σ(Y) = √99.71 ≈ 9.99.

Since P(X = 11, Y = 20) = 0.02 ≠ P(X = 11)·P(Y = 20) = 0.02·0.06, the random variables X and Y are dependent.
2. Conditional distribution laws of X.
Conditional distribution law of X given Y = 20:
P(X = 11 | Y = 20) = 2/6 = 0.33
P(X = 16 | Y = 20) = 4/6 = 0.67
P(X = 21 | Y = 20) = 0/6 = 0
P(X = 26 | Y = 20) = 0/6 = 0
P(X = 31 | Y = 20) = 0/6 = 0
P(X = 36 | Y = 20) = 0/6 = 0
Conditional expectation M[X | Y = 20] = 11·0.33 + 16·0.67 + 21·0 + 26·0 + 31·0 + 36·0 = 14.33
Conditional variance D[X | Y = 20] = 11²·0.33 + 16²·0.67 + 21²·0 + 26²·0 + 31²·0 + 36²·0 − 14.33² = 5.56
Conditional distribution law of X given Y = 30:
P(X = 11 | Y = 30) = 0/9 = 0
P(X = 16 | Y = 30) = 6/9 = 0.67
P(X = 21 | Y = 30) = 3/9 = 0.33
P(X = 26 | Y = 30) = 0/9 = 0
P(X = 31 | Y = 30) = 0/9 = 0
P(X = 36 | Y = 30) = 0/9 = 0
Conditional expectation M[X | Y = 30] = 11·0 + 16·0.67 + 21·0.33 + 26·0 + 31·0 + 36·0 = 17.67
Conditional variance D[X | Y = 30] = 11²·0 + 16²·0.67 + 21²·0.33 + 26²·0 + 31²·0 + 36²·0 − 17.67² = 5.56
Conditional distribution law of X given Y = 40:
P(X = 11 | Y = 40) = 0/55 = 0
P(X = 16 | Y = 40) = 0/55 = 0
P(X = 21 | Y = 40) = 6/55 = 0.11
P(X = 26 | Y = 40) = 45/55 = 0.82
P(X = 31 | Y = 40) = 4/55 = 0.0727
P(X = 36 | Y = 40) = 0/55 = 0
Conditional expectation M[X | Y = 40] = 11·0 + 16·0 + 21·0.11 + 26·0.82 + 31·0.0727 + 36·0 = 25.82
Conditional variance D[X | Y = 40] = 11²·0 + 16²·0 + 21²·0.11 + 26²·0.82 + 31²·0.0727 + 36²·0 − 25.82² = 4.51
Conditional distribution law of X given Y = 50:
P(X = 11 | Y = 50) = 0/16 = 0
P(X = 16 | Y = 50) = 0/16 = 0
P(X = 21 | Y = 50) = 2/16 = 0.13
P(X = 26 | Y = 50) = 8/16 = 0.5
P(X = 31 | Y = 50) = 6/16 = 0.38
P(X = 36 | Y = 50) = 0/16 = 0
Conditional expectation M[X | Y = 50] = 11·0 + 16·0 + 21·0.13 + 26·0.5 + 31·0.38 + 36·0 = 27.25
Conditional variance D[X | Y = 50] = 11²·0 + 16²·0 + 21²·0.13 + 26²·0.5 + 31²·0.38 + 36²·0 − 27.25² = 10.94
Conditional distribution law of X given Y = 60:
P(X = 11 | Y = 60) = 0/14 = 0
P(X = 16 | Y = 60) = 0/14 = 0
P(X = 21 | Y = 60) = 0/14 = 0
P(X = 26 | Y = 60) = 4/14 = 0.29
P(X = 31 | Y = 60) = 7/14 = 0.5
P(X = 36 | Y = 60) = 3/14 = 0.21
Conditional expectation M[X | Y = 60] = 11·0 + 16·0 + 21·0 + 26·0.29 + 31·0.5 + 36·0.21 = 30.64
Conditional variance D[X | Y = 60] = 11²·0 + 16²·0 + 21²·0 + 26²·0.29 + 31²·0.5 + 36²·0.21 − 30.64² = 12.37
3. Conditional distribution laws of Y.
Conditional distribution law of Y given X = 11:
P(Y = 20 | X = 11) = 2/2 = 1
P(Y = 30 | X = 11) = 0/2 = 0
P(Y = 40 | X = 11) = 0/2 = 0
P(Y = 50 | X = 11) = 0/2 = 0
P(Y = 60 | X = 11) = 0/2 = 0
Conditional expectation M[Y | X = 11] = 20·1 + 30·0 + 40·0 + 50·0 + 60·0 = 20
Conditional variance D[Y | X = 11] = 20²·1 + 30²·0 + 40²·0 + 50²·0 + 60²·0 − 20² = 0
Conditional distribution law of Y given X = 16:
P(Y = 20 | X = 16) = 4/10 = 0.4
P(Y = 30 | X = 16) = 6/10 = 0.6
P(Y = 40 | X = 16) = 0/10 = 0
P(Y = 50 | X = 16) = 0/10 = 0
P(Y = 60 | X = 16) = 0/10 = 0
Conditional expectation M[Y | X = 16] = 20·0.4 + 30·0.6 + 40·0 + 50·0 + 60·0 = 26
Conditional variance D[Y | X = 16] = 20²·0.4 + 30²·0.6 + 40²·0 + 50²·0 + 60²·0 − 26² = 24
Conditional distribution law of Y given X = 21:
P(Y = 20 | X = 21) = 0/11 = 0
P(Y = 30 | X = 21) = 3/11 = 0.27
P(Y = 40 | X = 21) = 6/11 = 0.55
P(Y = 50 | X = 21) = 2/11 = 0.18
P(Y = 60 | X = 21) = 0/11 = 0
Conditional expectation M[Y | X = 21] = 20·0 + 30·0.27 + 40·0.55 + 50·0.18 + 60·0 = 39.09
Conditional variance D[Y | X = 21] = 20²·0 + 30²·0.27 + 40²·0.55 + 50²·0.18 + 60²·0 − 39.09² = 44.63
Conditional distribution law of Y given X = 26:
P(Y = 20 | X = 26) = 0/57 = 0
P(Y = 30 | X = 26) = 0/57 = 0
P(Y = 40 | X = 26) = 45/57 = 0.79
P(Y = 50 | X = 26) = 8/57 = 0.14
P(Y = 60 | X = 26) = 4/57 = 0.0702
Conditional expectation M[Y | X = 26] = 20·0 + 30·0 + 40·0.79 + 50·0.14 + 60·0.0702 = 42.81
Conditional variance D[Y | X = 26] = 20²·0 + 30²·0 + 40²·0.79 + 50²·0.14 + 60²·0.0702 − 42.81² = 34.23
Conditional distribution law of Y given X = 31:
P(Y = 20 | X = 31) = 0/17 = 0
P(Y = 30 | X = 31) = 0/17 = 0
P(Y = 40 | X = 31) = 4/17 = 0.24
P(Y = 50 | X = 31) = 6/17 = 0.35
P(Y = 60 | X = 31) = 7/17 = 0.41
Conditional expectation M[Y | X = 31] = 20·0 + 30·0 + 40·0.24 + 50·0.35 + 60·0.41 = 51.76
Conditional variance D[Y | X = 31] = 20²·0 + 30²·0 + 40²·0.24 + 50²·0.35 + 60²·0.41 − 51.76² = 61.59
Conditional distribution law of Y given X = 36:
P(Y = 20 | X = 36) = 0/3 = 0
P(Y = 30 | X = 36) = 0/3 = 0
P(Y = 40 | X = 36) = 0/3 = 0
P(Y = 50 | X = 36) = 0/3 = 0
P(Y = 60 | X = 36) = 3/3 = 1
Conditional expectation M[Y | X = 36] = 20·0 + 30·0 + 40·0 + 50·0 + 60·1 = 60
Conditional variance D[Y | X = 36] = 20²·0 + 30²·0 + 40²·0 + 50²·0 + 60²·1 − 60² = 0
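The conditional series above can be reproduced programmatically. The following Python sketch is an illustration, not part of the original solution; it assumes the frequency table is stored with rows X = 11…36 and columns Y = 20…60:

```python
import numpy as np

# Frequency table of Example 2: rows X = 11..36, columns Y = 20..60, n = 100.
N = np.array([
    [2, 0, 0, 0, 0],
    [4, 6, 0, 0, 0],
    [0, 3, 6, 2, 0],
    [0, 0, 45, 8, 4],
    [0, 0, 4, 6, 7],
    [0, 0, 0, 0, 3],
])
xs = np.array([11, 16, 21, 26, 31, 36])
ys = np.array([20, 30, 40, 50, 60])

def conditional_of_x_given_y(j):
    """Conditional distribution series of X for Y = ys[j]."""
    col = N[:, j]
    p = col / col.sum()                  # P(X = x_i | Y = y_j)
    m = xs @ p                           # conditional expectation M[X | Y = y_j]
    d = (xs**2) @ p - m**2               # conditional variance D[X | Y = y_j]
    return p, m, d

for j, y in enumerate(ys):
    p, m, d = conditional_of_x_given_y(j)
    print(f"Y = {y}: M[X|Y] = {m:.2f}, D[X|Y] = {d:.2f}")
# The results agree with the hand calculation up to rounding,
# e.g. M[X|Y=20] = 14.33, M[X|Y=40] = 25.82, M[X|Y=60] = 30.64.
```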
Covariance.
cov(X, Y) = M[XY] − M[X]·M[Y]
cov(X, Y) = (20·11·2 + 20·16·4 + 30·16·6 + 30·21·3 + 40·21·6 + 50·21·2 + 40·26·45 + 50·26·8 + 60·26·4 + 40·31·4 + 50·31·6 + 60·31·7 + 60·36·3)/100 − 25.3·42.3 = 38.11
If random variables are independent, their covariance is zero. In our case cov(X, Y) ≠ 0, which again shows that X and Y are dependent.
Correlation coefficient.
r_xy = cov(X, Y)/(σ_x·σ_y) = 38.11/(4.9·9.99) ≈ 0.78 (σ_x and σ_y are computed below).
The equation of the linear regression line of Y on X has the form:
y_x − ȳ = r_xy·(σ_y/σ_x)·(x − x̄)

The equation of the linear regression line of X on Y has the form:
x_y − x̄ = r_xy·(σ_x/σ_y)·(y − ȳ)
We find the necessary numerical characteristics.
Sample means:
ȳ = (20·(2 + 4) + 30·(6 + 3) + 40·(6 + 45 + 4) + 50·(2 + 8 + 6) + 60·(4 + 7 + 3))/100 = 42.3
x̄ = (11·2 + 16·(4 + 6) + 21·(3 + 6 + 2) + 26·(45 + 8 + 4) + 31·(4 + 6 + 7) + 36·3)/100 = 25.3
Variances:
σ²_y = (20²·(2 + 4) + 30²·(6 + 3) + 40²·(6 + 45 + 4) + 50²·(2 + 8 + 6) + 60²·(4 + 7 + 3))/100 − 42.3² = 99.71
σ²_x = (11²·2 + 16²·(4 + 6) + 21²·(3 + 6 + 2) + 26²·(45 + 8 + 4) + 31²·(4 + 6 + 7) + 36²·3)/100 − 25.3² = 24.01
Hence the standard deviations:
σ_x = 4.9 and σ_y = 9.99,
and the covariance:
cov(X, Y) = (20·11·2 + 20·16·4 + 30·16·6 + 30·21·3 + 40·21·6 + 50·21·2 + 40·26·45 + 50·26·8 + 60·26·4 + 40·31·4 + 50·31·6 + 60·31·7 + 60·36·3)/100 − 25.3·42.3 = 38.11
We determine the correlation coefficient:
r_xy = cov(X, Y)/(σ_x·σ_y) = 38.11/(4.9·9.99) ≈ 0.78

We write down the equation of the regression line of Y on X:
y_x = ȳ + r_xy·(σ_y/σ_x)·(x − x̄) = 42.3 + 1.59·(x − 25.3),
and, computing, we obtain:
y_x = 1.59x + 2.15
We write down the equation of the regression line of X on Y:
x_y = x̄ + r_xy·(σ_x/σ_y)·(y − ȳ) = 25.3 + 0.38·(y − 42.3),
and, computing, we obtain:
x_y = 0.38y + 9.14
If we plot the points given by the table together with the regression lines, we see that both lines pass through the point (x̄; ȳ) = (25.3; 42.3) and that the points lie close to the regression lines.
Significance of the correlation coefficient.
The observed value of the test statistic is t_obs = r·√(n − 2)/√(1 − r²) = 0.78·√98/√(1 − 0.78²) ≈ 12.3.
From Student's t-distribution with significance level α = 0.05 and k = n − m − 1 = 100 − 1 − 1 = 98 degrees of freedom we find t_crit:
t_crit(n − m − 1; α/2) = t(98; 0.025) = 1.984,
where m = 1 is the number of explanatory variables.
If t_obs > t_crit, the obtained value of the correlation coefficient is recognized as significant (the null hypothesis asserting that the correlation coefficient equals zero is rejected).
Since t_obs > t_crit, we reject the hypothesis that the correlation coefficient equals zero. In other words, the correlation coefficient is statistically significant.
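A short Python sketch (again an illustration, not the service's own code) that reproduces the sample characteristics, the regression lines and the t-statistic from the same frequency table:

```python
import numpy as np

# Frequency table of Example 2: rows X = 11..36, columns Y = 20..60, n = 100.
N = np.array([
    [2, 0, 0, 0, 0],
    [4, 6, 0, 0, 0],
    [0, 3, 6, 2, 0],
    [0, 0, 45, 8, 4],
    [0, 0, 4, 6, 7],
    [0, 0, 0, 0, 3],
], dtype=float)
xs = np.array([11, 16, 21, 26, 31, 36], dtype=float)
ys = np.array([20, 30, 40, 50, 60], dtype=float)

n = N.sum()
px, py = N.sum(axis=1) / n, N.sum(axis=0) / n     # relative frequencies of X and Y

x_bar, y_bar = xs @ px, ys @ py                   # 25.3, 42.3
sx = np.sqrt((xs**2) @ px - x_bar**2)             # ~4.9
sy = np.sqrt((ys**2) @ py - y_bar**2)             # ~9.99
cov = xs @ (N / n) @ ys - x_bar * y_bar           # 38.11
r = cov / (sx * sy)                               # ~0.78

# Sample regression lines.
b_yx = cov / sx**2                                # slope of Y on X: ~1.59
b_xy = cov / sy**2                                # slope of X on Y: ~0.38
print(f"y_x = {b_yx:.2f}*x + {y_bar - b_yx * x_bar:.2f}")
print(f"x_y = {b_xy:.2f}*y + {x_bar - b_xy * y_bar:.2f}")

# Significance of r (Student's t with n - 2 = 98 degrees of freedom).
t_obs = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
print(f"t_obs = {t_obs:.1f}")                     # >> 1.984, so r is significant
```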

Problem. The numbers of occurrences of the values of the random variables X and Y in the corresponding intervals are given in a table. From these data, find the sample correlation coefficient and the sample equations of the regression lines of Y on X and of X on Y.

Example. The probability distribution of a two-dimensional random variable (X, Y) is given by a table. Find the distribution laws of the components X and Y and the correlation coefficient r(X, Y).

Problem. A two-dimensional discrete random variable (X, Y) is given by its distribution law. Find the distribution laws of the components X and Y, the covariance and the correlation coefficient.

A random variable (X, Y) whose possible values are pairs of numbers (x, y) is called two-dimensional. The components X and Y, considered simultaneously, form a system of two random variables.

A two-dimensional random variable can be interpreted geometrically as a random point M(X; Y) in the xOy plane or as a random vector OM.

A two-dimensional random variable is called discrete if its components are discrete.

A two-dimensional random variable is called continuous if its components are continuous.

The probability distribution law of a two-dimensional random variable is the correspondence between its possible values and their probabilities.

The distribution law of a discrete two-dimensional random variable can be specified: a) as a double-entry table containing the possible values and their probabilities; b) analytically, for example in the form of a distribution function.

The distribution function of a two-dimensional random variable is the function F(x, y) that, for each pair of numbers (x, y), gives the probability that X takes a value less than x and, at the same time, Y takes a value less than y:

F(x, y) = P(X < x, Y < y).

Geometrically this equality can be interpreted as follows: F(x, y) is the probability that the random point (X, Y) falls into the infinite quadrant with vertex (x, y) lying to the left of and below that vertex.

Sometimes, instead of the term "distribution function", the term "integral function" is used.

The distribution function has the following properties:

Property 1. The values of the distribution function satisfy the double inequality

0 ≤ F(x, y) ≤ 1.

Property 2. The distribution function is a non-decreasing function of each argument:

F(x2, y) ≥ F(x1, y) if x2 > x1,

F(x, y2) ≥ F(x, y1) if y2 > y1.

Property 3. The following limit relations hold:

1) F(−∞, y) = 0,

2) F(x, −∞) = 0,

3) F(−∞, −∞) = 0,

4) F(∞, ∞) = 1.

Property 4. a) When y = ∞, the distribution function of the system becomes the distribution function of the component X:

F(x, ∞) = F1(x).

b) When x = ∞, the distribution function of the system becomes the distribution function of the component Y:

F(∞, y) = F2(y).

Using the distribution function, one can find the probability that the random point falls into the rectangle x1 < X < x2, y1 < Y < y2:

P(x1 < X < x2, y1 < Y < y2) = [F(x2, y2) − F(x1, y2)] − [F(x2, y1) − F(x1, y1)].

The joint probability density (two-dimensional probability density) of a continuous two-dimensional random variable is the second mixed derivative of the distribution function:

f(x, y) = ∂²F(x, y)/∂x∂y.

Sometimes, instead of the term "two-dimensional probability density", the term "differential function of the system" is used.

The joint density can be viewed as the limit of the ratio of the probability that the random point falls into a rectangle with sides Δx and Δy to the area of that rectangle, as both sides tend to zero; geometrically it can be interpreted as a surface, called the distribution surface.

Knowing the distribution density, one can find the distribution function by the formula

F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u, v) du dv.

The probability that the random point (X, Y) falls into a region D is determined by the equality

P((X, Y) ∈ D) = ∬_D f(x, y) dx dy.

The two-dimensional probability density has the following properties:

Property 1. The two-dimensional probability density is non-negative:

f(x, y) ≥ 0.

Property 2. The double improper integral of the two-dimensional probability density with infinite limits equals one:

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.

In particular, if all possible values of (X, Y) belong to a finite region D, then ∬_D f(x, y) dx dy = 1.
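These properties can be checked numerically for a concrete density. The sketch below uses a hypothetical density f(x, y) = x + y on the unit square (an assumption for illustration, not taken from the text) and a crude grid sum instead of exact integration:

```python
import numpy as np

# Hypothetical joint density: f(x, y) = x + y on the unit square, 0 elsewhere.
f = lambda x, y: x + y

# Crude Riemann-sum integration on a grid (enough to check the properties).
h = 1e-3
x = np.arange(h / 2, 1, h)
y = np.arange(h / 2, 1, h)
X, Y = np.meshgrid(x, y)

total = f(X, Y).sum() * h * h                 # should be ~1 (Property 2)
print(total)

# Probability of falling into the region D = {x < 0.5, y < 0.5}:
mask = (X < 0.5) & (Y < 0.5)
p_D = (f(X, Y) * mask).sum() * h * h          # analytic value: 0.125
print(p_D)
```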

226. The probability distribution of a discrete two-dimensional random variable is given (see the table).

Find the distribution laws of the components.

228. The distribution function of a two-dimensional random variable is given.

Find the probability that the random point (X, Y) falls into the rectangle bounded by the lines x = 0, x = π/4, y = π/6, y = π/3.

229. Find the probability that the random point (X, Y) falls into the rectangle bounded by the lines x = 1, x = 2, y = 3, y = 5, if the distribution function is known.

230. The distribution function of a two-dimensional random variable is given.

Find the two-dimensional probability density of the system.

231. Inside the circle x² + y² ≤ R² the two-dimensional probability density is given; outside the circle f(x, y) = 0. Find: a) the constant C; b) the probability that the random point (X, Y) falls into the disc of radius r = 1 centred at the origin, if R = 2.

232. In the first quadrant the distribution function of a system of two random variables is F(x, y) = 1 − 2^(−x) − 2^(−y) + 2^(−x−y). Find: a) the two-dimensional probability density of the system; b) the probability that the random point (X, Y) falls into the triangle with vertices A(1; 3), B(3; 3), C(2; 8).

8.2. Conditional distribution laws of the components of a discrete two-dimensional random variable

Let the components X and Y be discrete, with the possible values x1, x2, …, xn and y1, y2, …, ym respectively.

The conditional distribution of the component X given Y = yj (j keeps the same value for all possible values of X) is the set of conditional probabilities

p(x1 | yj), p(x2 | yj), …, p(xn | yj).

The conditional distribution of Y is defined similarly.

The conditional probabilities of the components X and Y are calculated by the formulas

p(xi | yj) = p(xi, yj)/p(yj),   p(yj | xi) = p(xi, yj)/p(xi).

To check the calculations, it is advisable to make sure that the sum of the probabilities of each conditional distribution equals one.
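As a quick illustration of these formulas (with a hypothetical joint table, not one from the text), the conditional distribution of X for each value of Y is obtained by normalizing the corresponding column, and the control check is the assertion that it sums to one:

```python
import numpy as np

# Hypothetical joint pmf table p_ij (rows: values of X, columns: values of Y).
P = np.array([
    [0.10, 0.05, 0.15],
    [0.20, 0.30, 0.20],
])

def conditional_x_given_y(P, j):
    """P(X = x_i | Y = y_j) = p_ij / p(y_j)."""
    return P[:, j] / P[:, j].sum()

for j in range(P.shape[1]):
    cond = conditional_x_given_y(P, j)
    assert abs(cond.sum() - 1.0) < 1e-12   # control: conditional probabilities sum to 1
    print(j, cond)
```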

233. A discrete two-dimensional random variable (X, Y) is given (see the table).

Find: a) the conditional distribution law of X given that Y = 10; b) the conditional distribution law of Y given that X = 6.

8.3. Densities and conditional distribution laws of the components of a continuous two-dimensional random variable

The distribution density of one of the components is equal to the improper integral, with infinite limits, of the joint density of the system, where the variable of integration corresponds to the other component:

f1(x) = ∫_{−∞}^{∞} f(x, y) dy,   f2(y) = ∫_{−∞}^{∞} f(x, y) dx.

Here it is assumed that the possible values of each component occupy the entire numerical axis; if the possible values belong to a finite interval, the corresponding finite numbers are taken as the limits of integration.

The conditional distribution density of the component X for a given value Y = y is the ratio of the joint density of the system to the distribution density of the component Y:

f(x | y) = f(x, y)/f2(y).

The conditional distribution density of the component Y is defined similarly:

f(y | x) = f(x, y)/f1(x).

If the conditional distribution densities of the random variables X and Y are equal to their unconditional densities, the variables are independent.

A distribution of a two-dimensional continuous random variable (X, Y) is called uniform if the joint probability density is constant in the region containing all possible values (x, y).

235. The joint distribution density of a continuous two-dimensional random variable (X, Y) is given.

Find: a) the distribution densities of the components; b) the conditional distribution densities of the components.

236. The joint distribution density of a continuous two-dimensional random variable (X, Y) is given.

Find: a) the constant factor C; b) the distribution densities of the components; c) the conditional distribution densities of the components.

237. A continuous two-dimensional random variable (X, Y) is distributed uniformly inside a rectangle with centre of symmetry at the origin and sides 2a and 2b parallel to the coordinate axes. Find: a) the two-dimensional probability density of the system; b) the distribution densities of the components.

238. A continuous two-dimensional random variable (X, Y) is distributed uniformly inside the right triangle with vertices O(0; 0), A(0; 8), B(8; 0). Find: a) the two-dimensional probability density of the system; b) the densities and the conditional densities of the components.

8.4. Numerical characteristics of a continuous system of two random variables

Knowing the distribution densities of the components X and Y of a continuous two-dimensional random variable (X, Y), one can find their expectations and variances:

M(X) = ∫_{−∞}^{∞} x·f1(x) dx,   D(X) = ∫_{−∞}^{∞} [x − M(X)]²·f1(x) dx   (and similarly for Y).

Sometimes it is more convenient to use formulas containing the two-dimensional probability density (the double integrals are taken over the region of possible values of the system):

M(X) = ∬ x·f(x, y) dx dy,   D(X) = ∬ [x − M(X)]²·f(x, y) dx dy.

The initial moment ν_{k,s} of order k + s of the system (X, Y) is the expectation of the product X^k·Y^s:

ν_{k,s} = M[X^k·Y^s].

In particular,

ν_{1,0} = M(X), ν_{0,1} = M(Y).

The central moment μ_{k,s} of order k + s of the system (X, Y) is the expectation of the product of the deviations raised to the k-th and s-th powers respectively:

μ_{k,s} = M{[X − M(X)]^k·[Y − M(Y)]^s}.

In particular,

μ_{1,0} = M[X − M(X)] = 0, μ_{0,1} = M[Y − M(Y)] = 0;

μ_{2,0} = M[X − M(X)]² = D(X), μ_{0,2} = M[Y − M(Y)]² = D(Y).

The correlation moment μ_xy of the system (X, Y) is the central moment μ_{1,1} of order 1 + 1:

μ_xy = M{[X − M(X)]·[Y − M(Y)]}.

The correlation coefficient of the variables X and Y is the ratio of the correlation moment to the product of the standard deviations of these variables:

r_xy = μ_xy/(σ_x·σ_y).

The correlation coefficient is a dimensionless quantity, and |r_xy| ≤ 1. The correlation coefficient serves to assess the strength of the linear relationship between X and Y: the closer the absolute value of the correlation coefficient is to one, the stronger the relationship; the closer it is to zero, the weaker the relationship.

Two random variables are called correlated if their correlation moment is non-zero.

Two random variables are called uncorrelated if their correlation moment is zero.

Two correlated variables are necessarily dependent; if two variables are dependent, they may be either correlated or uncorrelated. Independence of two variables implies that they are uncorrelated, but from uncorrelatedness one still cannot conclude that the variables are independent (for normally distributed variables, however, uncorrelatedness does imply independence).
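The remark that uncorrelatedness does not imply independence can be illustrated by a small simulation (a hypothetical example, not from the text): take Y = X² with X symmetric about zero, so Y is functionally dependent on X yet almost uncorrelated with it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)     # symmetric about zero
y = x**2                             # functionally (hence strictly) dependent on x

r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))                   # close to 0: X and Y are (nearly) uncorrelated
                                     # even though Y is completely determined by X
```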

For continuous variables X and Y the correlation moment can be found by the formulas:
μ_xy = ∬ [x − M(X)]·[y − M(Y)]·f(x, y) dx dy = ∬ x·y·f(x, y) dx dy − M(X)·M(Y).

239. The joint distribution density of a continuous two-dimensional random variable (X, Y) is given.

Find: a) the expectations; b) the variances of the components X and Y.

240. The joint distribution density of a continuous two-dimensional random variable (X, Y) is given.

Find the expectations and variances of the components.

241. The joint distribution density of a continuous two-dimensional random variable (X, Y) is given: f(x, y) = 2·cos x·cos y in the square 0 ≤ x ≤ π/4, 0 ≤ y ≤ π/4; outside the square f(x, y) = 0. Find the expectations of the components.

242. Prove that if the two-dimensional probability density of a system of random variables (X, Y) can be represented as a product of two functions, one of which depends only on x and the other only on y, then the variables X and Y are independent.

243. Prove that if X and Y are connected by the linear dependence Y = aX + b, then the absolute value of the correlation coefficient equals one.

Solution. By the definition of the correlation coefficient,

r_xy = μ_xy/(σ_x·σ_y),

where μ_xy = M{[X − M(X)]·[Y − M(Y)]}. (*)

We find the expectation of Y:

M(Y) = M(aX + b) = a·M(X) + b. (**)

Substituting (**) into (*), after elementary transformations we obtain

μ_xy = a·M[X − M(X)]² = a·D(X) = a·σ²_x.

Taking into account that

Y − M(Y) = (aX + b) − (a·M(X) + b) = a·[X − M(X)],

we find the variance of Y:

D(Y) = M{a²·[X − M(X)]²} = a²·M[X − M(X)]² = a²·σ²_x.

Hence σ_y = |a|·σ_x. Consequently, the correlation coefficient

r_xy = a·σ²_x/(σ_x·|a|·σ_x) = a/|a|.

If a > 0, then r_xy = 1; if a < 0, then r_xy = −1.

So |r_xy| = 1, which was to be proved.
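The result of problem 243 is easy to confirm numerically; the following sketch (an illustration with arbitrarily chosen a and b) estimates the correlation coefficient of X and Y = aX + b from simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

for a, b in [(2.0, 5.0), (-0.3, 1.0)]:
    y = a * x + b                    # exact linear dependence
    r = np.corrcoef(x, y)[0, 1]
    print(a, round(r, 6))            # +1 for a > 0, -1 for a < 0 (up to float error)
```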

Definition. If two random variables X and Y are defined on the same space of elementary events, we say that a two-dimensional random variable (X, Y) is given.

Example. A machine stamps steel tiles. The length X and the width Y are monitored; (X, Y) is a two-dimensional random variable.

The random variables X and Y have their own distribution functions and other characteristics.

Definition. The distribution function of a two-dimensional random variable (X, Y) is the function F(x, y) = P(X < x, Y < y).

Definition. The distribution law of a discrete two-dimensional random variable (X, Y) is the table of its possible pairs of values together with their probabilities p_ij = P(X = x_i, Y = y_j).

For a two-dimensional discrete random variable, F(x, y) = Σ_{x_i < x} Σ_{y_j < y} p_ij.

Properties:

1) 0 ≤ F(x, y) ≤ 1;

2) F(x, y) is non-decreasing in each argument: if x2 > x1, then F(x2, y) ≥ F(x1, y); if y2 > y1, then F(x, y2) ≥ F(x, y1);

3) F(−∞, y) = F(x, −∞) = F(−∞, −∞) = 0; F(+∞, +∞) = 1;

4) F(x, +∞) = F1(x) is the distribution function of X;

F(+∞, y) = F2(y) is the distribution function of Y.

The probability that the two-dimensional random variable takes a value in a rectangle:

P(x1 ≤ X < x2, y1 ≤ Y < y2) = F(x2, y2) − F(x1, y2) − F(x2, y1) + F(x1, y1).

Definition. A two-dimensional random variable (X, Y) is called continuous if its distribution function is continuous and has everywhere (except, perhaps, on a finite number of curves) a continuous mixed partial derivative of the second order.

Definition. The joint probability density of a two-dimensional continuous random variable is the function f(x, y) = ∂²F(x, y)/∂x∂y.

Then, obviously, F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u, v) du dv.

Example 1. A two-dimensional continuous random variable is given by its distribution function F(x, y).

Then its distribution density is f(x, y) = ∂²F/∂x∂y.

Example 2. A two-dimensional continuous random variable is given by its distribution density f(x, y).

We find its distribution function: F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u, v) du dv.

Properties:

1) f(x, y) ≥ 0;

2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1;

3) P((X, Y) ∈ D) = ∬_D f(x, y) dx dy for any region D.

Let the joint distribution density be known. Then the distribution density of each component of the two-dimensional random variable is obtained as follows:

f1(x) = ∫_{−∞}^{∞} f(x, y) dy,   f2(y) = ∫_{−∞}^{∞} f(x, y) dx.

Example 2 (continued).

The distribution densities of the components of a two-dimensional random variable are called by some authors the marginal probability densities.

Conditional distribution laws of the components of a discrete random variable.

The conditional probability P(X = x_i | Y = y_j) = p_ij/p(y_j), where p(y_j) = Σ_i p_ij.

The conditional distribution law of X given Y = y_j:

X    x_1            x_2            …    x_n
P    p(x_1 | y_j)   p(x_2 | y_j)   …    p(x_n | y_j)

Similarly, P(Y = y_j | X = x_i) = p_ij/p(x_i), where p(x_i) = Σ_j p_ij.

Example 3. Construct the conditional distribution law of X for Y = 2.

The conditional distribution law of X given Y = 2 is the row of values of X with the probabilities p(x_i | 2) = p_i2/P(Y = 2).

Definition. The conditional distribution density of the component X for a given value Y = y is f(x | y) = f(x, y)/f2(y).

Similarly, f(y | x) = f(x, y)/f1(x).

Definition. The conditional expectation of a discrete random variable Y given X = x is M(Y | X = x) = Σ_j y_j·p(y_j | x), where p(y_j | x) are the conditional probabilities defined above.

Hence M(Y | X = x) is computed from the conditional distribution law of Y.

For a continuous random variable Y, M(Y | X = x) = ∫_{−∞}^{∞} y·f(y | x) dy.

Obviously, M(Y | X = x) is a function of the argument x. This function is called the regression function of Y on X.

The regression function of X on Y is defined similarly: M(X | Y = y) = ∫_{−∞}^{∞} x·f(x | y) dx.

Theorem 5 (on the distribution function of independent random variables).

Random variables X and Y are independent if and only if F(x, y) = F1(x)·F2(y).

Corollary. Continuous random variables X and Y are independent if and only if f(x, y) = f1(x)·f2(y).

In Example 1, F(x, y) = F1(x)·F2(y); consequently, the random variables X and Y are independent.

Numerical characteristics of the components of a two-dimensional random variable

For discrete random variables: M(X) = Σ_i x_i·p_i^(1), M(Y) = Σ_j y_j·p_j^(2).

For continuous random variables: M(X) = ∫ x·f1(x) dx, M(Y) = ∫ y·f2(y) dy.

The variance and the standard deviation of any random variable are determined by the formulas already known to us: D(X) = M(X²) − [M(X)]², σ(X) = √D(X).

Definition. The point (M(X), M(Y)) is called the centre of dispersion of the two-dimensional random variable.

Definition. The covariance (correlation moment) of the random variables X and Y is cov(X, Y) = M{[X − M(X)]·[Y − M(Y)]}.

For discrete random variables: cov(X, Y) = Σ_i Σ_j [x_i − M(X)]·[y_j − M(Y)]·p_ij.

For continuous random variables: cov(X, Y) = ∬ [x − M(X)]·[y − M(Y)]·f(x, y) dx dy.

A formula convenient for calculation: cov(X, Y) = M(XY) − M(X)·M(Y).

For independent random variables cov(X, Y) = 0.

An inconvenience of the covariance as a characteristic is its dimension (the square of the unit of measurement of the components). The following quantity is free of this shortcoming.

Definition. The correlation coefficient of the random variables X and Y is r_xy = cov(X, Y)/(σ_x·σ_y).

For independent random variables r_xy = 0.

For any pair of random variables |r_xy| ≤ 1. It is known that |r_xy| = 1 if and only if Y = aX + b, where a ≠ 0.

Definition. The random variables X and Y are called uncorrelated if r_xy = 0.

The relationship between correlatedness and dependence of random variables:

- if X and Y are correlated, i.e. r_xy ≠ 0, then they are dependent; the converse is not true;

- if X and Y are independent, then r_xy = 0; the converse is not true.

Note 1. If the random variables X and Y are normally distributed and r_xy = 0, then they are independent.

Note 2. The practical value of r_xy as a measure of dependence is justified only when the joint distribution of the pair is normal or approximately normal. For arbitrary random variables X and Y one can arrive at an erroneous conclusion: r_xy may be zero even when X and Y are connected by a strict functional dependence.

Note 3. In mathematical statistics, a correlation is a probabilistic (statistical) dependence between quantities that, generally speaking, is not strictly functional. A correlation dependence arises when one of the quantities depends not only on the other but also on a number of random factors, or when among the conditions on which the quantities depend there are conditions common to both.

Example 4. For the random variables X and Y from Example 3, find cov(X, Y) and r_xy.

Solution.

Example 5. The joint distribution density of a two-dimensional random variable is given.

Definition 2.7. A two-dimensional random variable is a pair of random numbers (X, Y), i.e. a point on the coordinate plane (Fig. 2.11).

Fig. 2.11.

A two-dimensional random variable is a special case of a multidimensional random variable, or random vector.

Definition 2.8. A random vector is a random function X(t) with a finite set of possible values of the argument t, whose value for any value of t is a random variable.

A two-dimensional random variable is called continuous if its coordinates are continuous, and discrete if its coordinates are discrete.

To specify the distribution law of a two-dimensional random variable means to establish a correspondence between its possible values and the probabilities of these values. According to the way they are specified, random variables are divided into continuous and discrete, although there are general ways of specifying the distribution law of any random variable.

Discrete two-dimensional random variable

A discrete two-dimensional random variable is specified by means of a distribution table (Table 2.1).

Table 2.1

Distribution table (joint distribution) of the random variable (X, Y)

The elements of the table are determined by the formula p_ij = P(X = x_i, Y = y_j).

Properties of the elements of the distribution table:

p_ij ≥ 0,   Σ_i Σ_j p_ij = 1.

The distribution of each coordinate separately is called one-dimensional, or marginal:

p_i^(1) = P(X = x_i) is the marginal distribution of X;

p_j^(2) = P(Y = y_j) is the marginal distribution of Y.

The connection between the joint distribution of X and Y, given by the set of probabilities p_ij, i = 1, …, n, j = 1, …, m (the distribution table), and the marginal distributions:

p_i^(1) = P(X = x_i) = Σ_j p_ij.

Similarly, for the second component: p_j^(2) = P(Y = y_j) = Σ_i p_ij.

Task 2.14. Given:

Continuous two-dimensional random variable

f(x, y) dx dy is the probability element of a two-dimensional random variable (X, Y): the probability that the random point (X, Y) falls into a rectangle with sides dx, dy, as dx, dy → 0:

P(x < X < x + dx, y < Y < y + dy) ≈ f(x, y) dx dy.

Here f(x, y) is the distribution density of the two-dimensional random variable (X, Y). Specifying f(x, y) gives complete information about the distribution of the two-dimensional random variable.

The marginal distributions are specified as follows: for X, by the distribution density f1(x); for Y, by the distribution density f2(y).

Specifying the distribution law of a two-dimensional random variable by the distribution function

A universal way to specify the distribution law of a discrete or continuous two-dimensional random variable is the distribution function F(x, y).

Definition 2.9. The distribution function F(x, y) is the probability of the joint occurrence of the events {X < x₀} and {Y < y₀}, i.e. F(x₀, y₀) = P(X < x₀, Y < y₀): the probability for a random point on the coordinate plane to fall into the infinite quadrant with vertex at the point M(x₀, y₀) (the shaded region in Fig. 2.12).

Fig. 2.12. Illustration of the distribution function F(x, y)

Properties of the function F(x, y):

  • 1) 0 ≤ F(x, y) ≤ 1;
  • 2) F(−∞, −∞) = F(x, −∞) = F(−∞, y) = 0; F(∞, ∞) = 1;
  • 3) F(x, y) is non-decreasing in each argument;
  • 4) F(x, y) is continuous from the left and from below;
  • 5) consistency of the distributions: F(x, ∞) = F1(x) is the marginal distribution of X; F(∞, y) = F2(y) is the marginal distribution of Y.

The connection between f(x, y) and F(x, y): f(x, y) = ∂²F(x, y)/∂x∂y.

The connection between the joint density and the marginal densities: given f(x, y), the marginal densities are f1(x) = ∫ f(x, y) dy and f2(y) = ∫ f(x, y) dx.


The case of independent coordinates of a two-dimensional random variable

Definition 2.10. The random variables X and Y are independent if any events associated with each of these random variables are independent. From the definition of independent random variables it follows that:

  • 1) p_ij = p_i^(1)·p_j^(2);
  • 2) F(x, y) = F1(x)·F2(y).

It turns out that for independent X and Y it also holds that

3) f(x, y) = f1(x)·f2(y).

We prove that for independent X and Y conditions 2) and 3) are equivalent. Proof. a) Let 2) hold, i.e. F(x, y) = F1(x)·F2(y);

at the same time F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u, v) du dv, from which 3) follows by differentiating with respect to x and y;

b) now let 3) hold; then

F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f1(u)·f2(v) du dv = ∫_{−∞}^{x} f1(u) du · ∫_{−∞}^{y} f2(v) dv = F1(x)·F2(y),

i.e. 2) holds.

Let us consider some problems.

Task 2.15. The distribution is specified by the following table:

Build marginal distributions:

We obtain P(X = 3, Y = 4) = 0.17 ≠ P(X = 3)·P(Y = 4) = 0.1485 ⇒ X and Y are dependent.

Distribution function:


Task 2.16. The distribution is specified by the following table:

We obtain p11 = 0.2·0.3 = 0.06; p12 = 0.2·0.7 = 0.14; p21 = 0.8·0.3 = 0.24; p22 = 0.8·0.7 = 0.56 ⇒ the random variables X and Y are independent.
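The computation of task 2.16 amounts to comparing the joint table with the outer product of the marginal distributions. A small Python sketch of this check (the marginals 0.2/0.8 and 0.3/0.7 are taken from the calculation above, the rest is illustrative):

```python
import numpy as np

# Task 2.16: the joint table equals the outer product of the marginals,
# so X and Y are independent.
px = np.array([0.2, 0.8])            # marginal distribution of X
py = np.array([0.3, 0.7])            # marginal distribution of Y
P = np.outer(px, py)                 # p_ij = p_i * q_j
print(P)                             # [[0.06 0.14], [0.24 0.56]]

# Independence check for an arbitrary joint table: compare p_ij with p_i * q_j.
def independent(P, tol=1e-9):
    px, py = P.sum(axis=1), P.sum(axis=0)
    return np.allclose(P, np.outer(px, py), atol=tol)

print(independent(P))                # True
```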

Task 2.17. Given f(x, y) = (1/π)·exp[−0.5·(x² + 2xy + 5y²)]. Find f1(x) and f2(y).

Solution

(Work this out yourself.)

Let a two-dimensional random variable $(X,\ Y)$ be given.

Definition 1.

The distribution law of the two-dimensional random variable $(X,Y)$ is the set of possible pairs of numbers $(x_i,\ y_j)$ (where $x_i\in X,\ y_j\in Y$) together with their probabilities $p_{ij}$.

Most often, the distribution law of a two-dimensional random variable is written as a table (Table 1).

Figure 1. The distribution law of a two-dimensional random variable.

Let us now recall the theorem on the addition of probabilities of mutually exclusive events.

Theorem 1.

The probability of the sum of a finite number of mutually exclusive events $A_1$, $A_2$, ..., $A_n$ is calculated by the formula:

$$P\left(A_1+A_2+\dots +A_n\right)=P\left(A_1\right)+P\left(A_2\right)+\dots +P\left(A_n\right).$$

Using this formula, one can obtain the distribution law of each component of the two-dimensional random variable:

$$P\left(x_i\right)=\sum_{j}p_{ij},\qquad P\left(y_j\right)=\sum_{i}p_{ij}.$$

From this it follows that the sum of all probabilities of the two-dimensional system equals one:

$$\sum_{i}\sum_{j}p_{ij}=1.$$

Let us now consider, step by step, a problem involving the concept of the distribution law of a two-dimensional random variable.

Example 1.

The distribution law of a two-dimensional random variable is given as follows:

Figure 2.

Find the distribution laws of the random variables $X$, $Y$ and $X+Y$, and in each case check that the total probability equals one.

  1. First we find the distribution of the random variable $X$. The random variable $X$ can take the values $x_1=2$, $x_2=3$, $x_3=5$. To find the distribution we use Theorem 1.

We first find the probability of $x_1$ as follows:

Figure 3.

Similarly, we find $P\left(x_2\right)$ and $P\left(x_3\right)$:

Figure 4.

  2. We now find the distribution of the random variable $Y$. The random variable $Y$ can take the values $y_1=1$, $y_2=3$, $y_3=4$. To find the distribution we use Theorem 1.

We first find the probability of $y_1$ as follows:

Figure 5.

Similarly, we find $P\left(y_2\right)$ and $P\left(y_3\right)$:

This means that the distribution series obtained has the following form:

Figure 6.

We check that the total probability equals one:

  3. It remains to find the distribution law of the random variable $X+Y$.

For convenience, denote it by $Z$: $Z=X+Y$.

First we find which values this variable can take. To do this, we add the values of $X$ and $Y$ pairwise and obtain: 3, 4, 6, 5, 6, 8, 6, 7, 9. Discarding repetitions, we conclude that the random variable $X+Y$ can take the values $z_1=3,\ z_2=4,\ z_3=5,\ z_4=6,\ z_5=7,\ z_6=8,\ z_7=9$.

We find $P(z_1)$. Since the value $z_1$ is obtained in only one way, its probability is:

Figure 7.

The other probabilities are found similarly, except for $P(z_4)$:

We now find $P(z_4)$ as follows (the value 6 is obtained from three different pairs of values):

Figure 8.

Thus, the distribution law of the variable $Z$ has the following form:

Figure 9.

We check that the total probability equals one:
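The same bookkeeping for the distribution of $Z=X+Y$ is easy to automate. The sketch below uses the values $x_i$ and $y_j$ from this example, but a hypothetical probability matrix, since the concrete $p_{ij}$ are given only in the figure:

```python
from collections import defaultdict
import numpy as np

# Values of X and Y from this example; the matrix P is illustrative only.
xs = [2, 3, 5]
ys = [1, 3, 4]
P = np.array([
    [0.10, 0.15, 0.05],
    [0.10, 0.05, 0.15],
    [0.20, 0.10, 0.10],
])   # must sum to 1

pz = defaultdict(float)
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        pz[x + y] += P[i, j]          # accumulate P(Z = x + y)

for z in sorted(pz):                  # the value 6 collects three cells of the table
    print(z, round(pz[z], 2))
print(sum(pz.values()))               # check: total probability is 1
```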
