Functions of random variables. Transforming random variables using the delta function

66.1. Relation (65.11), which determines the probability density of a transformed variable through the density of the original random variable, can be generalized to the case of a transformation of n random variables. Let the random variables ξ_1, …, ξ_n have a joint density f(x_1, …, x_n), and let n functions φ_1, …, φ_n of n variables be given. It is necessary to find the joint probability density g(y_1, …, y_n) of the random variables

η_i = φ_i(ξ_1, …, ξ_n), i = 1, …, n. (66.1)

This problem differs from the general formulation of Section 64 by the condition m = n: the number of initial random variables equals the number of transformed variables. The inverse of transformation (66.1) is found as a solution of the system of equations y_i = φ_i(x_1, …, x_n) with respect to the variables x_1, …, x_n; moreover, each x_i = ψ_i(y_1, …, y_n) depends on all of y_1, …, y_n. The set of functions ψ_1, …, ψ_n forms the inverse transformation. In general, the inverse transformation is many-valued. Let ψ_1^(k), …, ψ_n^(k), k = 1, 2, …, be the k-th branch of the inverse transformation; then the following relation is valid:

g(y_1, …, y_n) = Σ_k f(ψ_1^(k)(y_1, …, y_n), …, ψ_n^(k)(y_1, …, y_n)) |J_k|, (66.2)

where the sum is taken over all branches of the inverse transformation, and

J_k = ∂(ψ_1^(k), …, ψ_n^(k)) / ∂(y_1, …, y_n)

is the Jacobian of the transformation from the random variables η_1, …, η_n to the random variables ξ_1, …, ξ_n.

If from the n initial random variables only m < n random variables are obtained, then formula (66.2) can still be used by supplementing the system with n − m random variables, for example with the variables η_(m+1) = ξ_(m+1), …, η_n = ξ_n, and then integrating the auxiliary variables out of the resulting n-dimensional density. If m > n, then m − n of the random variables in the set are functionally related to the remaining ones; therefore, the m-dimensional density will contain delta functions.

Relations (64.4), (64.6) and (66.2) define two methods for solving the problem of calculating the density of a set of random variables obtained by a functional transformation of the original random variables with a joint probability density. The main difficulty in applying the first method is calculating the n-dimensional integral over a complex domain. In the second method, the main difficulty is finding all the branches of the inverse transformation.

66.2. Let us consider a simple example: calculating, by formula (66.2), the probability density of the sum of two random variables ξ_1 and ξ_2 with joint density f(x_1, x_2). Obviously, the sum should be chosen as the first transformed quantity, η_1 = ξ_1 + ξ_2, and as the second we take η_2 = ξ_2 (although one could also take η_2 = ξ_1). Thus, the functional transformation from ξ_1, ξ_2 to η_1, η_2 is given by the system of equations:

y_1 = x_1 + x_2, y_2 = x_2.

The inverse transformation is the solution of this system of equations with respect to x_1, x_2:

x_1 = y_1 − y_2, x_2 = y_2.

The inverse transformation is unique, therefore in (66.2) the sum consists of one term. Let us find the Jacobian of the transformation:

J = ∂(x_1, x_2)/∂(y_1, y_2) = (∂x_1/∂y_1)(∂x_2/∂y_2) − (∂x_1/∂y_2)(∂x_2/∂y_1) = 1·1 − (−1)·0 = 1.

Now (66.2) for g(y_1, y_2) takes the form:

g(y_1, y_2) = f(y_1 − y_2, y_2).

The function g(y_1, y_2) is the joint probability density of the random variables η_1 and η_2. Hence the probability density of the sum η_1 = ξ_1 + ξ_2 is found from the consistency condition:

f_η(y) = ∫_(−∞)^(+∞) g(y, y_2) dy_2 = ∫_(−∞)^(+∞) f(y − y_2, y_2) dy_2. (66.7)
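As a quick check of the convolution formula (66.7), here is a minimal Monte Carlo sketch (not part of the original text; the choice of two independent Exp(1) variables, the sample size and the seed are ours). For this case f(x_1, x_2) = exp(−x_1) exp(−x_2) on x_1, x_2 ≥ 0, and (66.7) evaluates to f_η(y) = y exp(−y):

```python
# Monte Carlo check of the convolution formula (66.7).
# For independent Exp(1) variables the convolution integral gives y * exp(-y).
import numpy as np

rng = np.random.default_rng(0)
xi1, xi2 = rng.exponential(size=(2, 100_000))
eta = xi1 + xi2

hist, edges = np.histogram(eta, bins=40, range=(0.0, 8.0), density=True)
y = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - y * np.exp(-y))))  # small, shrinks as the sample grows
```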

Let us consider the first method for solving the same problem. From (64.4) it follows:

F_η(y) = P(ξ_1 + ξ_2 < y) = ∫∫_(x_1 + x_2 < y) f(x_1, x_2) dx_1 dx_2.

The problem comes down to transforming the integral over the domain defined by the condition x_1 + x_2 < y. This integral can be represented as:

F_η(y) = ∫_(−∞)^(+∞) dx_2 ∫_(−∞)^(y − x_2) f(x_1, x_2) dx_1.

Hence, differentiating with respect to y, we obtain the probability density:

f_η(y) = dF_η(y)/dy = ∫_(−∞)^(+∞) f(y − x_2, x_2) dx_2,

which coincides with formula (66.7).

Chi-squared probability distribution

67.1. The chi-squared distribution with n degrees of freedom is the probability distribution of the random variable

χ² = ξ_1² + … + ξ_n²,

where ξ_1, …, ξ_n are independent random variables, all Gaussian with mathematical expectation 0 and variance 1. In accordance with formula (64.3), the probability distribution function of the random variable χ² is equal to

F(y) = P(χ² < y) = ∫_(x_1² + … + x_n² < y) f(x_1, …, x_n) dx_1 … dx_n, (67.1)

where f(x_1, …, x_n) is the joint probability density of the quantities ξ_1, …, ξ_n. By the conditions they are independent, therefore f is equal to the product of the one-dimensional densities:

f(x_1, …, x_n) = (2π)^(−n/2) exp(−(x_1² + … + x_n²)/2). (67.2)


From (67.1), (67.2) it follows that the probability density of the random variable χ² is determined by the expression:

f(y) = (d/dy) ∫_(x_1² + … + x_n² < y) (2π)^(−n/2) exp(−(x_1² + … + x_n²)/2) dx_1 … dx_n. (67.3)


Analysis of this expression is apparently the simplest way to find the density, since the integrand depends on x_1, …, x_n only through the combination x_1² + … + x_n², and (67.3) can be represented as:

f(y) dy = (2π)^(−n/2) e^(−y/2) ∫_(y < x_1² + … + x_n² < y + dy) dx_1 … dx_n. (67.4)

Here the integral is equal to the volume of the region of n-dimensional space enclosed between two hyperspheres: one of radius √y and the other of radius √(y + dy). Since the volume of a hypersphere of radius R is proportional to R^n, i.e. V = C_n R^n = C_n y^(n/2), then

dV = C_n (n/2) y^(n/2 − 1) dy (67.5)

is the volume between the two hyperspheres with radii √y and √(y + dy), which determines, up to a factor, the integral in (67.4). Let us substitute (67.5) into (67.4); then

f(y) = C y^(n/2 − 1) e^(−y/2), y ≥ 0, (67.6)

where C is a constant that can be determined from the normalization condition:

∫_0^∞ f(y) dy = 1. (67.7)

Let us substitute (67.6) into (67.7); then

C ∫_0^∞ y^(n/2 − 1) e^(−y/2) dy = 1. (67.8)

Let t = y/2; then the integral in (67.8) becomes

∫_0^∞ y^(n/2 − 1) e^(−y/2) dy = 2^(n/2) ∫_0^∞ t^(n/2 − 1) e^(−t) dt = 2^(n/2) Γ(n/2), (67.9)

where Γ(n/2) is the gamma function of the argument n/2. From (67.8) and (67.9) the constant

C = 1 / (2^(n/2) Γ(n/2)) (67.10)

is determined, the substitution of which into (67.6) leads to the result

f(y) = y^(n/2 − 1) e^(−y/2) / (2^(n/2) Γ(n/2)), y ≥ 0. (67.11)
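A simulation sketch of (67.11), assuming NumPy is available (the value n = 5, the sample size and the binning are arbitrary choices of ours): the histogram of ξ_1² + … + ξ_n² for independent N(0, 1) draws should match the derived density.

```python
# Histogram of xi_1^2 + ... + xi_n^2 for independent N(0, 1) variables
# against the derived density y^(n/2 - 1) * exp(-y/2) / (2^(n/2) * Gamma(n/2)).
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
n = 5
chi2 = (rng.standard_normal((200_000, n)) ** 2).sum(axis=1)

hist, edges = np.histogram(chi2, bins=60, range=(0.5, 20.0), density=True)
y = 0.5 * (edges[:-1] + edges[1:])
density = y ** (n / 2 - 1) * np.exp(-y / 2) / (2 ** (n / 2) * gamma(n / 2))
print(np.max(np.abs(hist - density)))  # agreement up to Monte Carlo error
```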

67.2. Let us calculate the mathematical expectation and variance of the random variable χ². From (67.11)

M(χ²) = ∫_0^∞ y f(y) dy = 2 Γ(n/2 + 1) / Γ(n/2) = n. (67.12)


Similarly, the mean square of the quantity χ² is equal to

M((χ²)²) = ∫_0^∞ y² f(y) dy = 4 Γ(n/2 + 2) / Γ(n/2) = n(n + 2). (67.13)


From (67.12), (67.13) the variance is

D(χ²) = M((χ²)²) − (M(χ²))² = n(n + 2) − n² = 2n.
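The moments (67.12) and the variance 2n can be checked by the same kind of simulation (again a sketch of ours, with arbitrary degrees of freedom and sample size):

```python
# Sample check of M(chi^2) = n (67.12) and D(chi^2) = 2n.
import numpy as np

rng = np.random.default_rng(2)
for n in (1, 4, 10):
    chi2 = (rng.standard_normal((500_000, n)) ** 2).sum(axis=1)
    print(n, chi2.mean(), chi2.var())  # approximately n and 2n
```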

67.3. In problems of mathematical statistics, probability distributions associated with the normal distribution are important. These are primarily the χ²-distribution (Pearson distribution), the t-distribution (Student distribution) and the F-distribution (Fisher distribution). The χ²-distribution with n degrees of freedom is the probability distribution of the random variable

χ² = ξ_1² + … + ξ_n²,

where ξ_1, …, ξ_n are independent Gaussian random variables with M(ξ_i) = 0 and D(ξ_i) = 1.

The Student distribution (or t-distribution) with n degrees of freedom is the probability distribution of the random variable

t = ξ / √(χ_n² / n),

where ξ and χ_n² are independent random variables: ξ is Gaussian with M(ξ) = 0, D(ξ) = 1, and χ_n² has the chi-squared distribution with n degrees of freedom.

The Fisher distribution (F-distribution) with (m, n) degrees of freedom is the probability distribution of the random variable

F = (χ_m² / m) / (χ_n² / n),

where χ_m² and χ_n² are independent chi-squared random variables with m and n degrees of freedom respectively.
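The three definitions can be made concrete with a simulation sketch; it assumes SciPy is available and uses its t and F laws only as references to compare against (the degrees of freedom and sample size are arbitrary choices of ours):

```python
# Student's t and Fisher's F built from independent N(0, 1) variables exactly
# as in the definitions above; scipy.stats supplies the reference laws.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, m, size = 6, 4, 200_000

xi = rng.standard_normal(size)
chi2_n = (rng.standard_normal((size, n)) ** 2).sum(axis=1)
chi2_m = (rng.standard_normal((size, m)) ** 2).sum(axis=1)

t = xi / np.sqrt(chi2_n / n)        # Student's t with n degrees of freedom
F = (chi2_m / m) / (chi2_n / n)     # Fisher's F with (m, n) degrees of freedom

# Kolmogorov-Smirnov distances to the reference distributions are small.
print(stats.kstest(t, stats.t(df=n).cdf).statistic)
print(stats.kstest(F, stats.f(dfn=m, dfd=n).cdf).statistic)
```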

Chi-squared distribution and the Maxwell velocity distribution

The Maxwell distribution over the velocities of gas molecules is the probability density distribution of the velocity modulus and is determined by the relation

dN/N = 4π (μ/(2πRT))^(3/2) v² exp(−μv²/(2RT)) dv, (68.1)

where N is the number of gas molecules, dN is the number of molecules whose velocity modulus lies in the interval (v, v + dv), μ is the molar mass, R is the gas constant, and T is the absolute temperature of the gas. The ratio dN/N is the probability that the velocity modulus of a molecule lies in the interval (v, v + dv); then f(v) = (1/N) dN/dv is the probability density of the velocity modulus.

Distribution (68.1) can be obtained from the following two simple probabilistic postulates defining the ideal gas model. 1) The projections v_x, v_y, v_z of the velocity onto the axes of a Cartesian coordinate system are independent random variables. 2) Each velocity projection is a Gaussian random variable with zero expectation and variance a². The parameter a is set on the basis of experimental data.

Let us determine the probability density of the random variable

ρ² = (v_x² + v_y² + v_z²) / a², (68.2)

the squared relative velocity.

Obviously, it has a chi-squared distribution with three degrees of freedom. Therefore, its probability density is determined by formula (67.11) at n = 3:

f(y) = y^(1/2) e^(−y/2) / (2^(3/2) Γ(3/2)) = √(y/(2π)) e^(−y/2), y ≥ 0, (68.3)

because Γ(3/2) = √π / 2. So, (68.3) is the probability density of the squared relative velocity.

The next step is to move from the distribution of the squared relative speed to the distribution of its magnitude, ρ = √(ρ²). The functional transformation has the form y = x², x ≥ 0, and the inverse, for y ≥ 0, is x = √y. Thus, the inverse transformation is unique. Therefore, according to (65.1), the distribution density of the modulus has the form

f_ρ(x) = f(x²) · 2x = √(2/π) x² e^(−x²/2), x ≥ 0. (68.4)

The last step is to move from the random variable ρ to the new random variable, the velocity modulus itself:

v = aρ. (68.5)

The inverse transformation ρ = v/a is single-valued, therefore the probability density of the random variable v, according to (65.1), takes the form

f(v) = (1/a) f_ρ(v/a) = √(2/π) (v²/a³) exp(−v²/(2a²)), (68.6)

which coincides with formula (68.1).
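A sketch of the whole construction for the relative speed (a = 1): three independent N(0, 1) velocity components are drawn, and the histogram of the modulus is compared with the density (68.4). The sample size and binning are arbitrary choices of ours.

```python
# Maxwell distribution from the two model postulates: three independent
# Gaussian velocity components. With a = 1 the speed modulus must follow
# the relative-speed density (68.4): sqrt(2/pi) * x^2 * exp(-x^2 / 2).
import numpy as np

rng = np.random.default_rng(4)
components = rng.standard_normal((300_000, 3))
speed = np.linalg.norm(components, axis=1)

hist, edges = np.histogram(speed, bins=50, range=(0.0, 4.0), density=True)
x = 0.5 * (edges[:-1] + edges[1:])
maxwell = np.sqrt(2 / np.pi) * x ** 2 * np.exp(-x ** 2 / 2)
print(np.max(np.abs(hist - maxwell)))  # small
```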

Relation (68.5), which determines the relationship between the relative velocity ρ and the absolute velocity v, follows from the third postulate of the ideal gas model, which is a purely physical condition, in contrast to the first two probabilistic conditions. The third condition can be formulated as a statement about the value of the average kinetic energy of one molecule, in the form of the equality

⟨mv²/2⟩ = (3/2) kT, (68.7)

where k is Boltzmann's constant; equality (68.7) is, in fact, an experimental fact. Let v = aρ, where a is a constant which is determined below from condition (68.7). To find it, we determine from (68.4) the mean square of the relative velocity:

⟨ρ²⟩ = ∫_0^∞ x² f_ρ(x) dx = 3.

Then the average kinetic energy of the molecule is ⟨mv²/2⟩ = m a² ⟨ρ²⟩/2 = (3/2) m a², where m is the mass of the molecule; taking into account (68.7), (3/2) m a² = (3/2) kT, or a = √(kT/m).
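A numeric sanity check of (68.7) and a = √(kT/m); the molecule mass below is argon-like and chosen purely for illustration.

```python
# With a = sqrt(kT/m) the simulated mean kinetic energy is (3/2) kT, as (68.7) requires.
import numpy as np

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # absolute temperature, K
m = 6.63e-26       # molecule mass, kg (argon-like, for illustration only)
a = np.sqrt(k * T / m)

rng = np.random.default_rng(5)
v = a * rng.standard_normal((1_000_000, 3))
energy = 0.5 * m * (v ** 2).sum(axis=1)
print(energy.mean() / (1.5 * k * T))  # approximately 1
```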

Transformations of random variables

For each random variable X one defines three more quantities: the centered variable Y, the normalized variable V and the reduced variable U. The centered random variable Y is the difference between the given random variable X and its mathematical expectation M(X), i.e. Y = X − M(X). The expectation of the centered random variable Y equals 0, and its variance is the variance of the given random variable: M(Y) = 0, D(Y) = D(X). The distribution function F_Y(x) of the centered random variable Y is related to the distribution function F(x) of the original random variable X by the relation:

F_Y(x) = F(x + M(X)).

The densities of these random variables satisfy the equality

f_Y(x) = f(x + M(X)).

The normalized random variable V is the ratio of the given random variable X to its standard deviation σ(X) = √D(X), i.e. V = X / σ(X). The expectation and variance of the normalized random variable V are expressed through the characteristics of X as follows:

M(V) = M(X)/σ(X) = 1/v, D(V) = 1,

where v = σ(X)/M(X) is the coefficient of variation of the original random variable X. For the distribution function F_V(x) and density f_V(x) of the normalized random variable V we have:

F_V(x) = F(σ(X) x), f_V(x) = σ(X) f(σ(X) x),

where F(x) is the distribution function of the original random variable X, and f(x) is its probability density.

The reduced random variable U is the centered and normalized random variable:

U = (X − M(X)) / σ(X).

For the reduced random variable

M(U) = 0, D(U) = 1. (7)
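A short sketch of the three transformations on a sample (the gamma distribution, sample size and seed are arbitrary choices of ours); the sample versions of M(U) = 0 and D(U) = 1 hold up to Monte Carlo error:

```python
# Centering, normalizing and reducing a sample X; U has mean 0 and variance 1
# whatever the distribution of X.
import numpy as np

rng = np.random.default_rng(6)
X = rng.gamma(shape=3.0, scale=2.0, size=100_000)   # any distribution will do

Y = X - X.mean()                 # centered
V = X / X.std()                  # normalized
U = (X - X.mean()) / X.std()     # reduced

print(Y.mean(), U.mean(), U.var())   # ~0, ~0, ~1
print(V.mean(), X.mean() / X.std())  # M(V) = 1/v, the inverse coefficient of variation
```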

Normalized, centered and reduced random variables are constantly used both in theoretical studies and in algorithms, software products, and regulatory, technical and instructional documentation, in particular because the equalities M(U) = 0, D(U) = 1 make it possible to simplify the justification of methods, the formulation of theorems and calculation formulas.

More general transformations of random variables are also used. Thus, if Y = aX + b, where a and b are numbers, then

M(Y) = a M(X) + b, D(Y) = a² D(X), F_Y(x) = F((x − b)/a) for a > 0. (8)

Example 7. If a = 1/σ(X) and b = −M(X)/σ(X), then Y is the reduced random variable, and formulas (8) transform into formulas (7).

With each random variable X one can associate many random variables Y given by the formula Y = aX + b for different a > 0 and b. This set is called the scale-shift family generated by the random variable X. The distribution functions F_Y(x) constitute a scale-shift family of distributions generated by the distribution function F(x). Instead of Y = aX + b, one often uses the notation

X = c + dU. (9)

The number c is called the shift parameter, and the number d the scale parameter. Formula (9) shows that X, the result of measuring a certain quantity, goes over into U, the result of measuring the same quantity, if the origin of measurement is moved to the point c and the new unit of measurement, d times larger than the old one, is then used.

For the scale-shift family (9), the distribution of U is called standard. In probabilistic-statistical methods of decision making and other applied research, the standard normal distribution, the standard Weibull-Gnedenko distribution, the standard gamma distribution, etc. are used (see below).

Other transformations of random variables are also used. For example, for a positive random variable X one considers Y = lg X, where lg X is the decimal logarithm of the number X. The chain of equalities

F_Y(x) = P(lg X < x) = P(X < 10^x) = F(10^x)

connects the distribution functions of X and Y.
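A simulation sketch of this chain of equalities (the lognormal choice of a positive X, the point x = 0.3, the sample size and the seed are arbitrary choices of ours):

```python
# Empirical illustration of F_Y(x) = F(10^x) for Y = lg X with a positive X.
import numpy as np

rng = np.random.default_rng(7)
X = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)
Y = np.log10(X)

x = 0.3
print((Y < x).mean(), (X < 10 ** x).mean())  # identical: {Y < x} and {X < 10^x} are the same event
```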

The task of establishing the distribution law of a function of random variables from a given distribution law of the arguments is the principal one. The general scheme of reasoning here is as follows. Let η = φ(ξ_1, …, ξ_n) and let the distribution law of ξ_1, …, ξ_n be given. Then we obviously have

F_η(y) = P(η < y) = P((ξ_1, …, ξ_n) ∈ φ^(−1)((−∞, y))), (*)

where φ^(−1)((−∞, y)) is the complete inverse image of the half-line (−∞, y), i.e. the set of those values of the vector (x_1, …, x_n) in R^n for which φ(x_1, …, x_n) < y. The last probability can easily be found, since the distribution law of the random variables ξ_1, …, ξ_n is known. Similarly, in principle, the distribution law of a vector function of random arguments can be found. The complexity of implementing this scheme depends only on the specific form of the function φ and the distribution law of the arguments. This chapter is devoted to the implementation of this scheme in specific situations that are important for applications.

§1. Functions of one variable

Let ξ be a random variable whose distribution law is given by the distribution function F_ξ(x), and let η = φ(ξ). If F_η(y) is the distribution function of the random variable η, then the above considerations give

F_η(y) = P(ξ ∈ φ^(−1)((−∞, y))), (1)

where φ^(−1)((−∞, y)) denotes the complete inverse image of the half-line (−∞, y). Relation (1) is an obvious consequence of (*) and for the case under consideration is illustrated in Fig. 1.

Monotonic transformation of a random variable. Let φ(t) be a continuous monotonic function (for definiteness, monotonically non-increasing) and η = φ(ξ). For the distribution function F_η(y) we obtain

F_η(y) = P(φ(ξ) < y) = P(ξ > φ^(−1)(y)) = 1 − F_ξ(φ^(−1)(y) + 0)

(here φ^(−1) is the inverse function, the existence of which is ensured by the monotonicity and continuity of φ). For a monotonically non-decreasing φ similar calculations give

F_η(y) = F_ξ(φ^(−1)(y)). (3)

In particular, if φ(t) = at + b is linear, then for a > 0 (Fig. 2)

F_η(y) = F_ξ((y − b)/a). (4)

Linear transformations do not change the nature of the distribution, but only affect its parameters.

Linear transformation of a random variable uniform on [a, b]. If ξ is uniform on [a, b] and η = λξ + μ with λ > 0, then by (4) η is uniform on [λa + μ, λb + μ].

Linear transformation of a normal random variable. If ξ is normal with parameters (m, σ²) and η = aξ + b, then η is normal with parameters (am + b, a²σ²); in particular, every normal distribution function reduces to the standard one, F_ξ(x) = Φ((x − m)/σ), an identity which is the source of many interesting applications.

Lemma. If ξ is a random variable with a continuous distribution function F_ξ(x), then the random variable η = F_ξ(ξ) is uniform on [0, 1]. Indeed, F_ξ is monotonically non-decreasing and is contained within the limits 0 ≤ F_ξ(x) ≤ 1; therefore F_η(y) = 0 for y < 0, F_η(y) = 1 for y > 1, and on the interval [0, 1] we obtain F_η(y) = y.

One of the possible ways of using the proven lemma is, for example, the procedure for modeling a random variable with an arbitrary distribution law F_ξ(x). As follows from the lemma, for this it is enough to be able to obtain values of a random variable uniform on [0, 1]: if u is such a value, then F_ξ^(−1)(u) has the distribution function F_ξ (see the sketch below).
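The modeling procedure just described is inverse-transform sampling. Here is a minimal sketch for the exponential law F(x) = 1 − exp(−λx), whose inverse is F^(−1)(u) = −ln(1 − u)/λ (the rate λ = 2 and the sample size are arbitrary choices of ours):

```python
# Inverse-transform sampling: if u is uniform on [0, 1], then F^(-1)(u) has
# distribution function F. Sketch for F(x) = 1 - exp(-lam * x).
import numpy as np

def sample_exponential(lam: float, size: int, rng: np.random.Generator) -> np.ndarray:
    u = rng.uniform(size=size)        # uniform on [0, 1]
    return -np.log(1.0 - u) / lam     # F^(-1)(u)

rng = np.random.default_rng(8)
sample = sample_exponential(2.0, 100_000, rng)
print(sample.mean())  # approximately 1/lam = 0.5
```

The same recipe works for any distribution function F with a computable inverse.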
