Examples of Markov processes. Elements of queuing theory

A Markov process is a random process whose evolution after any given value of the time parameter t does not depend on the evolution that preceded t, provided that the value of the process at that moment is fixed ("the future" of the process does not depend on the "past" when the "present" is known; another interpretation (Wentzel): the "future" of the process depends on the "past" only through the "present").


History

The property that defines a Markov process is usually called the Markov property; it was first formulated by A. A. Markov, who in his works of 1907 laid the foundation for the study of sequences of dependent trials and of the sums of random variables associated with them. This line of research is known as the theory of Markov chains.

The foundations of the general theory of Markov processes with continuous time were laid by A. N. Kolmogorov.

Markov property

General case

Let (Ω, F, P) be a probability space with a filtration (F_t, t ∈ T) over some (partially ordered) set T, and let (S, Σ) be a measurable space. A random process X = (X_t, t ∈ T), defined on the filtered probability space, is said to satisfy the Markov property if, for each A ∈ Σ and all s, t ∈ T with s < t,

P(X_t ∈ A | F_s) = P(X_t ∈ A | X_s).

A Markov process is a random process that satisfies the Markov property with respect to its natural filtration.

For Markov chains with discrete time

If S is a discrete set and T = ℕ, the definition can be reformulated:

P(X_n = x_n | X_{n-1} = x_{n-1}, X_{n-2} = x_{n-2}, ..., X_0 = x_0) = P(X_n = x_n | X_{n-1} = x_{n-1}).

An example of a Markov process

Consider a simple example of a Markov stochastic process. A point moves randomly along the x-axis. At time zero the point is at the origin and remains there for one second. A second later a coin is tossed: if it lands heads, the point X moves one unit of length to the right; if tails, to the left. A second later the coin is tossed again and the same random movement is made, and so on. The process of changing the position of the point ("wandering") is a random process with discrete time (t = 0, 1, 2, ...) and a countable set of states. Such a random process is called Markovian, since the next state of the point depends only on the present (current) state and does not depend on past states (it does not matter by which path and in what time the point reached its current coordinate).
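This wandering is easy to simulate. Below is a minimal sketch (Python; the function name and parameters are illustrative, not from the text) that generates one trajectory of the walk; note that each step uses only the current position, which is exactly the Markov property.

```python
import random

def random_walk(steps, start=0):
    """Simulate the coin-tossing walk: every second the point moves
    +1 (heads) or -1 (tails) from its current position."""
    position = start
    path = [position]
    for _ in range(steps):
        position += 1 if random.random() < 0.5 else -1
        path.append(position)
    return path

# One 10-step trajectory, e.g. [0, 1, 0, -1, 0, 1, 2, 1, 2, 3, 2]
print(random_walk(10))
```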

Markov random processes are named after the outstanding Russian mathematician A. A. Markov (1856-1922), who first began the study of the probabilistic relationship of random variables and created a theory that can be called "probability dynamics". Subsequently, the foundations of this theory became the initial basis for the general theory of random processes, as well as for such important applied sciences as the theory of diffusion processes, reliability theory, queuing theory, etc. At present, the theory of Markov processes and its applications are widely used in various fields of science, such as mechanics, physics, chemistry, etc.

Due to the comparative simplicity and clarity of the mathematical apparatus, high reliability and accuracy of the solutions obtained, Markov processes have received special attention from specialists involved in operations research and the theory of optimal decision making.

Despite the above simplicity and clarity, the practical application of the theory of Markov chains requires knowledge of some terms and basic provisions, which should be discussed before presenting examples.

As mentioned, Markov stochastic processes are special cases of random processes. In turn, random processes are based on the concept of a random function.

A random function is a function whose value, for any value of the argument, is a random variable. In other words, a random function is a function that in each trial takes some form not known in advance.

Examples of random functions are: voltage fluctuations in an electrical circuit, the speed of a car on a section of road with a speed limit, the surface roughness of a part in a certain section, etc.

As a rule, if the argument of a random function is time, the corresponding process is called a random process. There is another definition of random processes, closer to decision theory: a random process is the process of random change in the states of some physical or technical system in time or in some other argument.

It is easy to see that if we denote a state of the system and plot its dependence on the argument, such a dependence will be a random function.

Random processes are classified according to the types of their states and of the argument t: a random process may have discrete or continuous states and discrete or continuous time.

Besides this classification, random processes have another important property, which describes the probabilistic relationship between their states. For example, if in a random process the probability of the system's transition to each subsequent state depends only on the previous state, then such a process is called a process without aftereffect.

Note, first, that a random process with discrete states and discrete time is called a random sequence.

If a random sequence has the Markov property, then it is called a Markov chain.

On the other hand, if in a random process the states are discrete, time is continuous, and the absence of aftereffect holds, then such a random process is called a Markov process with continuous time.

A Markov stochastic process is called homogeneous if the transition probabilities remain constant during the process.

A Markov chain is considered specified if two conditions are given:

1. A set of transition probabilities in the form of a matrix.

2. A vector of initial probabilities describing the initial state of the system.
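As a sketch of how these two objects specify a chain (Python; the matrix and vector values are illustrative, not taken from the text):

```python
import numpy as np

# 1. The set of transition probabilities as a matrix:
#    P[i][j] = P(next state = j | current state = i); each row sums to 1.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.4, 0.6]])

# 2. The vector of initial probabilities describing the initial state.
p0 = np.array([1.0, 0.0, 0.0])

# Together they determine the state distribution after any number of steps.
p = p0
for _ in range(3):
    p = p @ P    # one step of the chain
print(p)         # state distribution after 3 steps
```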

In addition to the matrix form, the Markov chain model can be represented as a directed weighted graph (Fig. 1).

Fig. 1.

The set of states of the Markov chain system is classified in a certain way, taking into account the further behavior of the system.

1. Non-returnable set (Fig. 2).

Fig.2.

In the case of a non-returnable set, any transitions within this set are possible. The system can leave this set, but cannot return to it.

2. Recurrent set (Fig. 3).

Fig. 3.

In this case, any transitions within the set are also possible. The system can enter this set, but cannot leave it.

3. Ergodic set (Fig. 4).

Fig. 4.

In the case of an ergodic set, any transitions within the set are possible, but transitions from and to the set are excluded.

4. Absorbing set (Fig. 5)

Fig. 5.

When the system enters this set, the process ends.

In some cases, despite the randomness of the process, it is possible to control, to a certain extent, the distribution laws or the parameters of the transition probabilities. Such Markov chains are called controlled. Obviously, controlled Markov chains make the decision-making process especially effective, as will be discussed later.

The main feature of a discrete Markov chain (DMC) is the determinism of time intervals between individual steps (stages) of the process. However, this property is often not observed in real processes, and the intervals turn out to be random with some distribution law, although the process remains Markovian. Such random sequences are called semi-Markov.

In addition, taking into account the presence and absence of certain sets of states mentioned above, Markov chains can be absorbing if there is at least one absorbing state, or ergodic if the transition probabilities form an ergodic set. In turn, ergodic chains can be regular or cyclic. Cyclic chains differ from regular chains in that in the process of transitions through a certain number of steps (cycles) there is a return to some state. Regular chains do not have this property.
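These classes can be read directly off the transition matrix. A small sketch (Python) that checks stochasticity and finds absorbing states, i.e. states i with p_ii = 1; it uses the gardener's matrix P1 that appears later in the text:

```python
import numpy as np

def absorbing_states(P, tol=1e-12):
    """Return the indices i with P[i, i] = 1 (absorbing states)."""
    P = np.asarray(P, dtype=float)
    assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to 1"
    return [i for i in range(len(P)) if abs(P[i, i] - 1.0) < tol]

P1 = [[0.20, 0.50, 0.30],
      [0.00, 0.50, 0.50],
      [0.00, 0.00, 1.00]]
print(absorbing_states(P1))  # [2]: the third state ("bad" soil) absorbs
```

A chain with at least one such state is absorbing in the sense defined above.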

Structure and classification of queuing systems

Queuing systems

Often there is a need to solve probabilistic problems associated with queuing systems (QS), examples of which can be:

Ticket offices;

Repair shops;

Trade, transport, energy systems;

Communication systems;

The commonality of such systems is revealed in the unity of mathematical methods and models used in the study of their activities.

Fig. 4.1. The main areas of application of queuing theory.

A stream of service requests arrives at the input of a QS: for example, customers or patients, equipment breakdowns, telephone calls. Requests arrive irregularly, at random moments of time, and the duration of service is also random. This creates irregularity in the work of the QS and causes its overloads and underloads.

Queuing systems differ in structure, but four main elements can usually be distinguished:

1. Incoming demand flow.

2. Accumulator (queue).

3. Devices (service channels).

4. Output flow.

Fig. 4.2. General scheme of queuing systems.

Fig. 4.3. System operation model (arrows show the moments of arrival of requests into the system; rectangles show service times).

Figure 4.3a shows a model of the system with a regular flow of requests: since the interval between arrivals is known, the service time is chosen so as to load the system fully. For a system with a stochastic flow of requests the situation is completely different: the requests arrive at different moments of time, and the service time is also a random variable, described by a certain distribution law (Fig. 4.3b).

Depending on the rules for the formation of the queue, the following QS are distinguished:

1) systems with failures, in which a request that arrives when all service channels are busy leaves the system unserved;

2) systems with an unlimited queue, in which a request enters the queue if all service channels are busy at the moment of its arrival;

3) systems with waiting and a limited queue, in which the waiting time is limited by some conditions or there are restrictions on the number of requests in the queue.

Consider the characteristics of the incoming flow of requirements.

A flow of requests is called stationary if the probability that a given number of events falls within a time interval of a certain length depends only on the length of that interval.

A stream of events is called a stream without aftereffect if the number of events falling within a certain time interval does not depend on the number of events falling within other, non-overlapping intervals.



A stream of events is called ordinary if two or more events cannot occur simultaneously.

A flow of requests is called Poisson (or the simplest) if it has three properties: it is stationary, ordinary, and has no aftereffect. The name is due to the fact that under these conditions the number of events falling within any fixed time interval is distributed according to the Poisson law.

The intensity λ of the flow of requests is the average number of requests arriving from the flow per unit of time.

For a stationary flow the intensity is constant. If τ is the average time interval between two adjacent requests, then λ = 1/τ. In the case of a Poisson flow, the probability that m requests arrive for service during a time interval t is given by Poisson's law:

P_m(t) = ((λt)^m / m!) e^{-λt}.

The time between adjacent requests is distributed exponentially, with probability density

f(t) = λ e^{-λt}, t ≥ 0.

The service time is a random variable obeying an exponential distribution law with probability density g(t) = μ e^{-μt}, where μ is the intensity of the service flow, i.e. the average number of requests served per unit of time: μ = 1/t̄_s, where t̄_s is the average service time.

The ratio of the intensity of the incoming flow to the intensity of the service flow, ρ = λ/μ, is called the system load.
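These quantities are easy to evaluate directly; a small sketch (Python; the numerical values of λ and μ are illustrative):

```python
import math

lam = 4.0  # arrival intensity: average of 4 requests per unit time
mu = 5.0   # service intensity: average of 5 requests served per unit time

def poisson_prob(m, t, lam):
    """P(exactly m requests arrive during time t) for a Poisson flow."""
    return (lam * t) ** m / math.factorial(m) * math.exp(-lam * t)

print(poisson_prob(2, 1.0, lam))  # probability of exactly 2 arrivals in t = 1
print(1.0 / lam)                  # mean interval between arrivals, tau = 1/lambda
print(lam / mu)                   # system load rho = lambda/mu = 0.8
```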

A queuing system is a discrete type system with a finite or countable set of states, and the transition of the system from one state to another occurs abruptly when some event occurs.

A process is called a process with discrete states if its possible states can be enumerated in advance, and the transition of the system from state to state occurs almost instantly.

Such processes are of two types: with discrete or continuous time.

In the case of discrete time, transitions from state to state can occur at strictly defined instants of time. Processes with continuous time differ in that the transition of the system to a new state is possible at any time.

A random process is a correspondence that assigns to each value of the argument (here, a moment in the time interval of the experiment) a random variable (here, the state of the QS). A random variable is a quantity that, as a result of an experiment, takes one numerical value from a given numerical set, not known in advance.

Therefore, to solve the problems of queuing theory, it is necessary to study this random process, i.e. build and analyze its mathematical model.

A random process is called Markovian if, for any moment of time, the probabilistic characteristics of the process in the future depend only on its state at the present moment and do not depend on when and how the system came to this state.

Transitions of the system from state to state occur under the influence of certain streams of events (the flow of requests, the flow of failures). If all the streams of events that bring the system to a new state are the simplest Poisson streams, then the process occurring in the system will be Markovian, since the simplest stream has no aftereffect: in it the future does not depend on the past. For example, let the system be a group of chess pieces. The state of the system is characterized by the number of the opponent's pieces remaining on the board at moment t0. The probability that at a moment t > t0 the material advantage will be on the side of one of the opponents depends primarily on the state of the system at moment t0, and not on when and in what sequence the pieces disappeared from the board before t0.

Lecture 9. Markov processes

A random process in a system is called Markovian if it has no aftereffect. That is, if we regard the state of the process at the current moment t0 as the present, the set of possible states {ξ(s), s < t0} as the past, and the set of possible states {ξ(u), u > t0} as the future, then for a Markov process with a fixed present the future does not depend on the past: it is determined by the present alone and does not depend on when and how the system came to this state.

Markov random processes are named after the outstanding Russian mathematician A. A. Markov, who first began the study of the probabilistic relationship of random variables and created a theory that can be called the "dynamics of probabilities". Subsequently, the foundations of this theory became the initial base of the general theory of random processes, as well as of such important applied sciences as the theory of diffusion processes, reliability theory, queuing theory, etc.

Markov Andrey Andreevich (1856-1922), Russian mathematician. He wrote about 70 papers on number theory, the theory of approximation of functions, and probability theory; significantly extended the scope of the law of large numbers and the central limit theorem; and is the founder of the theory of random processes.

In practice, pure Markov processes are usually not encountered, but there are processes for which the influence of the "prehistory" can be neglected, and Markov models can be applied when studying them. At present, the theory of Markov processes and its applications are widely used in various fields:

Biology: birth and death processes (populations, mutations, epidemics).

Physics: radioactive decay, the theory of counters of elementary particles, diffusion processes.

Chemistry: the theory of tracks in nuclear photographic emulsions, probabilistic models of chemical kinetics.

Astronomy: the theory of fluctuations in the brightness of the Milky Way.

Queuing theory: telephone exchanges, repair shops, ticket offices, information desks, machine-tool and other technological systems, control systems for flexible manufacturing systems, information processing by servers.

Let the system be in a certain state S0 at the present moment t0. We know the characteristics of the state of the system in the present and everything that happened at t < t0 (the history of the process). Can we predict the future, i.e. what happens at t > t0? Not exactly, but some probabilistic characteristics of the process in the future can be found: for example, the probability that after some time the system S will be in state S1 or remain in state S0, and so on.

Markov processes. Example.

System S is a group of aircraft engaged in air combat. Let x be the number of "red" aircraft and y the number of "blue" aircraft. By time t0 the numbers of surviving (not shot down) aircraft are x0 and y0 respectively. We are interested in the probability that at a moment t > t0 the numerical superiority will be on the side of the "reds". This probability depends on the state of the system at time t0, and not on when and in what sequence the aircraft were shot down before t0.

Discrete Markov chains

A Markov process with a finite or countable number of states and moments of time is called a discrete Markov chain. Transitions from state to state are possible only at integer moments of time.

Discrete Markov chains. Example

Suppose we are talking about successive coin tosses in the game "heads or tails": the coin is tossed at conditional moments of time t = 0, 1, ... and at each step the player can win ±1 with the same probability 1/2. Thus, at moment t his total gain is a random variable ξ(t) with possible values j = 0, ±1, ... . Provided that ξ(t) = k, at the next step the gain is already ξ(t+1) = k ± 1, taking the values j = k ± 1 with the same probability 1/2. We can say that here, with the appropriate probability, a transition takes place from the state ξ(t) = k to the state ξ(t+1) = k ± 1.

Generalizing this example, one can imagine a system with a countable number of possible states which, in discrete time t = 0, 1, ..., passes randomly from state to state. Let ξ(t) be its position at time t as a result of a chain of random transitions

ξ(0) → ξ(1) → ... → ξ(t) → ξ(t+1) → ...

When analyzing random processes with discrete states, it is convenient to use a geometric scheme: the state graph. The vertices of the graph are the states of the system, and its arcs are the possible transitions from state to state. (Figure: the state graph of the game "heads or tails".)

Denote all possible states by the integers i = 0, ±1, ... Assume that, for a known state ξ(t) = i, at the next step the system passes into the state ξ(t+1) = j with conditional probability

P(ξ(t+1) = j | ξ(t) = i),

regardless of its behavior in the past; more precisely, regardless of the chain of transitions up to moment t:

P(ξ(t+1) = j | ξ(t) = i; ξ(t-1) = i_{t-1}; ...; ξ(0) = i_0) = P(ξ(t+1) = j | ξ(t) = i).

This property is called the Markov property.

The number

p_{ij} = P(ξ(t+1) = j | ξ(t) = i)

is called the probability of transition of the system from state i to state j in one step at moment t+1. If the transition probabilities do not depend on t, the Markov chain is called homogeneous.

The matrix P whose elements are the transition probabilities p_{ij} is called the transition matrix:

    p_{11} ... p_{1n}
P = p_{21} ... p_{2n}
    .................
    p_{n1} ... p_{nn}

It is stochastic, i.e. Σ_j p_{ij} = 1 for every i, and p_{ij} ≥ 0.

Discrete Markov chains. Example

The transition matrix of the game "heads or tails" is infinite: the states are k = 0, ±1, ±2, ..., and from state k the chain passes to k-1 or to k+1 with probability 1/2 each,

p_{k,k-1} = p_{k,k+1} = 1/2, p_{k,j} = 0 for all other j,

i.e. the matrix has zeros on the main diagonal and 1/2 on the two adjacent diagonals.

Discrete Markov chains. Example

As a result of a chemical analysis of the soil, a gardener evaluates its condition with one of three numbers: good (1), fair (2), or bad (3). As a result of observations over the years, the gardener noticed that the soil productivity in the current year depends only on its condition in the previous year. Therefore, the probabilities of the soil passing from one state to another can be represented by the following Markov chain with matrix P1:

     0.20 0.50 0.30
P1 = 0.00 0.50 0.50
     0.00 0.00 1.00

However, as a result of agrotechnical measures the gardener can change the transition probabilities in the matrix P1. Then the matrix P1 will be replaced by the matrix P2:

     0.30 0.60 0.10
P2 = 0.10 0.60 0.30
     0.05 0.40 0.55

Consider how the states of the process change over time. We will consider the process at successive moments of time, starting from moment 0. Let us set the initial probability distribution p(0) = (p_1(0), ..., p_m(0)), where m is the number of states of the process and p_i(0) is the probability of finding the process in state i at the initial moment. The probability p_i(n) is called the unconditional probability of state i at moment n ≥ 1; the components of the vector p(n) show which of the possible states of the chain are the most probable at moment n. Note that

Σ_{k=1}^{m} p_k(n) = 1.

Knowing the sequence {p(n)} for n = 1, ... allows one to get an idea of the behavior of the system in time. In a system with three states,

    p_{11} p_{12} p_{13}
P = p_{21} p_{22} p_{23}
    p_{31} p_{32} p_{33}

p_2(1) = p_1(0) p_{12} + p_2(0) p_{22} + p_3(0) p_{32},
p_2(n+1) = p_1(n) p_{12} + p_2(n) p_{22} + p_3(n) p_{32}.

In general:

p_j(1) = Σ_k p_k(0) p_{kj},   p_j(n+1) = Σ_k p_k(n) p_{kj},

or, in matrix form, p(n+1) = p(n) P.

Discrete Markov chains. Example

For the matrix

     0.20 0.50 0.30
P1 = 0.00 0.50 0.50
     0.00 0.00 1.00

and the initial distribution p(0) = (1, 0, 0):

Step    p(n)
n = 0   (1, 0, 0)
n = 1   (0.2, 0.5, 0.3)
n = 2   (0.04, 0.35, 0.61)
n = 3   (0.008, 0.195, 0.797)
n = 4   (0.0016, 0.1015, 0.8969)
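This table is reproduced by iterating p(n+1) = p(n) P; a short check (Python):

```python
import numpy as np

P1 = np.array([[0.20, 0.50, 0.30],
               [0.00, 0.50, 0.50],
               [0.00, 0.00, 1.00]])
p = np.array([1.0, 0.0, 0.0])  # n = 0: the soil starts in the "good" state

for n in range(1, 5):
    p = p @ P1                 # one year of evolution
    print(n, p.round(4))
# 1 [0.2    0.5    0.3   ]
# 2 [0.04   0.35   0.61  ]
# 3 [0.008  0.195  0.797 ]
# 4 [0.0016 0.1015 0.8969]
```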

The transition matrix in n steps is P(n) = P^n. For the same matrix P1 and p(0) = (1, 0, 0),

p(2) = p(0) P^2 = (0.04, 0.35, 0.61),

and, for example,

             0.0016  0.1015  0.8969
P(4) = P^4 = 0.0000  0.0625  0.9375
             0.0000  0.0000  1.0000

How do Markov chains behave as n → ∞? For a homogeneous Markov chain, under certain conditions, the following property holds: p(n) → π as n → ∞. The limiting probabilities π_i do not depend on the initial distribution p(0) but are determined only by the matrix P. In this case π is called the stationary distribution, and the chain itself is called ergodic. Ergodicity means that as n increases the probabilities of the states practically cease to change, and the system enters a steady mode of operation.

Discrete Markov chains. Example

For the gardener's matrix

     0.20 0.50 0.30
P1 = 0.00 0.50 0.50
     0.00 0.00 1.00

       0 0 1
P(∞) = 0 0 1        p(∞) = (0, 0, 1).
       0 0 1

For the matrix

     0.30 0.60 0.10
P2 = 0.10 0.60 0.30
     0.05 0.40 0.55

       0.1017 0.5254 0.3729
P(∞) = 0.1017 0.5254 0.3729        p(∞) = (0.1017, 0.5254, 0.3729).
       0.1017 0.5254 0.3729
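Both limits are easy to check numerically by raising P to a high power: for an ergodic chain every row of P^n converges to the stationary distribution. A sketch (Python):

```python
import numpy as np

def limiting_row(P, power=200):
    """Approximate P^n for large n and return its first row."""
    return np.linalg.matrix_power(np.asarray(P, dtype=float), power)[0]

P2 = [[0.30, 0.60, 0.10],
      [0.10, 0.60, 0.30],
      [0.05, 0.40, 0.55]]
print(limiting_row(P2).round(4))  # [0.1017 0.5254 0.3729]

P1 = [[0.20, 0.50, 0.30],
      [0.00, 0.50, 0.50],
      [0.00, 0.00, 1.00]]
print(limiting_row(P1).round(4))  # [0. 0. 1.]: absorption in the "bad" state
```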

Markov processes with continuous time

A process is called a continuous-time process if the moments of possible transitions from state to state are not fixed in advance but are uncertain and random, and a transition can occur at any moment.

Example. The technological system S consists of two devices, each of which can fail at a random moment of time, after which the repair of the unit begins immediately, also continuing for an unknown, random time. The following system states are possible:

S0 - both devices are working;
S1 - the first device is being repaired, the second one is working;
S2 - the second device is being repaired, the first one is working;
S3 - both devices are being repaired.

Transitions of the system S from state to state occur almost instantly, at the random moments when some device fails or a repair is completed. The probability of a simultaneous failure of both devices can be neglected.

Event streams

A stream of events is a sequence of homogeneous events following one after another at some random moments of time. The intensity λ of a stream of events is the average number of events per unit of time.

A stream of events is called stationary if its probabilistic characteristics do not depend on time. In particular, the intensity of a stationary stream is constant. The stream inevitably has local concentrations and rarefactions, but they are not of a regular nature, and the average number of events per unit of time is constant and does not depend on time.

A stream of events is called a stream without aftereffect if, for any two non-overlapping time intervals, the number of events falling in one of them does not depend on how many events fall in the other. In other words, the events forming the stream appear at successive moments of time independently of each other, each caused by its own causes.

A stream of events is called ordinary if the probability of the occurrence of two or more events in an elementary interval Δt is negligibly small compared with the probability of the occurrence of one event, i.e. events appear in it one at a time rather than in groups.

A stream of events is called the simplest (or stationary Poisson) if it has three properties at once: (1) it is stationary, (2) it is ordinary, and (3) it has no aftereffect. The simplest stream has the simplest mathematical description. It plays the same special role among streams as the normal distribution does among other distribution laws: when a sufficiently large number of independent, stationary and ordinary streams (comparable to each other in intensity) are superimposed, the result is a stream close to the simplest.

For the simplest stream with intensity λ, the interval T between adjacent events has an exponential distribution with density

p(x) = λ e^{-λx}, x ≥ 0.

For a random variable T with an exponential distribution, the mathematical expectation is the reciprocal of the parameter: M[T] = 1/λ.

Markov processes with continuous time

Considering processes with discrete states and continuous time, we may assume that all transitions of the system S from state to state occur under the action of the simplest event streams (streams of calls, streams of failures, streams of restorations, etc.). If all the streams of events that transfer the system S from state to state are the simplest, then the process occurring in the system will be Markovian.

Let the system in state S_i be subject to the simplest stream of events with intensity λ_{ij}. As soon as the first event of this stream appears, the system "jumps" from state S_i into state S_j; λ_{ij} is the intensity of the stream of events that transfers the system from state S_i to state S_j.

Let the system S under consideration have possible states S_1, S_2, ..., S_n. The probability p_{ij}(t) is the probability of transition from state i to state j in time t. The probability of the i-th state, p_i(t), is the probability that at moment t the system is in state S_i. Obviously, for any moment of time the sum of all the state probabilities equals one:

Σ_i p_i(t) = 1.

To find all the state probabilities p_i(t) as functions of time, the Kolmogorov differential equations are set up and solved: a special type of equations in which the unknown functions are the probabilities of the states. For the transition probabilities:

p'_{ij}(t) = Σ_k p_{ik}(t) λ_{kj}.

For the unconditional probabilities:

p'_j(t) = Σ_k p_k(t) λ_{kj}

(here λ_{kj}, k ≠ j, is the intensity of the transition from state k to state j, and λ_{jj} = -Σ_{k≠j} λ_{jk}).

Kolmogorov Andrey Nikolaevich (1903-1987), great Russian mathematician.

Here λ denotes a failure rate and μ the intensity of a restoration (repair) flow. Let the system be in state S0. It is transferred to state S1 by the flow of failures of the first device, whose intensity is λ1 = 1/t̄_1, where t̄_1 is the mean time of failure-free operation of the first device. From state S1 to S0 the system is transferred by the flow of restorations of the first device, whose intensity is μ1 = 1/t̄_rep1, where t̄_rep1 is the average repair time of the first device. The intensities of the event streams that transfer the system along all the arcs of the graph are calculated similarly.

Queuing systems

Examples of queuing systems (QS): telephone exchanges, repair shops, ticket offices, information desks, machine-tool and other technological systems, control systems for flexible manufacturing systems, information processing by servers, etc.

A QS consists of a certain number of serving units, which are called service channels (machines, robots, communication lines, cashiers, etc.). Every QS is designed to serve a stream of requests (demands) arriving at random moments of time. Service of a request continues for a random time, after which the channel is freed and is ready to receive the next request.

The operation of a QS is a random process with discrete states and continuous time. The state of the QS changes abruptly at the moments when certain events occur (the arrival of a new request, the end of a service, the moment when a request tired of waiting leaves the queue).

Classification of queuing systems:

1. QS with failures;
2. QS with a queue.

In a QS with failures, a request that arrives at a moment when all the channels are busy is refused, leaves the QS, and is not served. In a QS with a queue, a request that arrives when all the channels are busy does not leave but joins the queue and waits for an opportunity to be served. QS with queues are divided into different types depending on how the queue is organized: limited or unlimited. Restrictions can apply to the queue length, to the waiting time, and to the "service discipline".

The subject of queuing theory is the construction of mathematical models linking the given operating conditions of a QS (the number of channels, their performance, the rules of operation, the nature of the request flow) with the characteristics of interest: the performance indicators of the QS. These indicators describe the ability of the QS to cope with the flow of requests. They may be: the average number of requests served by the QS per unit of time; the average number of busy channels; the average number of requests in the queue; the average waiting time for service; etc.
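For the simplest single-channel QS with an unlimited queue (Poisson arrivals with intensity λ, exponential service with intensity μ, λ < μ), these indicators have well-known closed forms. The text does not derive them, so the sketch below (Python) simply assumes the standard M/M/1 model:

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 indicators: single channel, unlimited queue,
    Poisson arrivals (lam), exponential service (mu), lam < mu."""
    assert lam < mu, "a steady state requires load rho < 1"
    rho = lam / mu  # load: fraction of time the channel is busy
    return {
        "rho": rho,
        "L_queue": rho**2 / (1 - rho),  # average number of requests in the queue
        "L_system": rho / (1 - rho),    # average number of requests in the system
        "W_system": 1 / (mu - lam),     # average time a request spends in the system
        "W_queue": rho / (mu - lam),    # average waiting time in the queue
    }

print(mm1_metrics(lam=4.0, mu=5.0))  # rho = 0.8, L_queue = 3.2, W_system = 1.0, ...
```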


Exercise. Build the transition graph of the Markov chain with the transition matrix

    0.30 0.70 0.00
P = 0.10 0.60 0.30
    0.50 0.50 0.00

By a random process we understand a change in time of the states of some physical system in a previously unknown, random way. Here, by a physical system we mean any technical device, group of devices, enterprise, industry, biological system, etc.

A random process occurring in a system is called Markovian if, for any moment of time t0, the probabilistic characteristics of the process in the future (t > t0) depend only on its state at the given moment (the present) and do not depend on when and how the system came to this state in the past. (An example: a Geiger counter registering the number of cosmic particles.)

Markov processes are usually divided into 3 types:

1. Markov chain – a process whose states are discrete (i.e., they can be renumbered), and the time by which it is considered is also discrete (i.e., the process can change its states only at certain points in time). Such a process goes (changes) in steps (in other words, in cycles).

2. Discrete Markov process - the set of states is discrete (can be enumerated), and time is continuous (transition from one state to another - at any time).

3. Continuous Markov Process – the set of states and time are continuous.

In practice, Markov processes in their pure form are not often encountered. However, one often has to deal with processes for which the influence of the prehistory can be neglected. In addition, if all the parameters of the "past" on which the "future" depends are included in the state of the system in the "present", then the process can also be considered Markovian. However, this often leads to a significant increase in the number of variables taken into account and to the impossibility of obtaining a solution to the problem.

In operations research, so-called Markov stochastic processes with discrete states and continuous time are widely used.

A process is called a process with discrete states if all its possible states S0, S1, ... can be enumerated (renumbered) in advance, and the transition of the system from state to state occurs almost instantly, in a jump.

A process is called a continuous-time process if the moments of transition from state to state can take any random values on the time axis.

Example: a technical device S consists of two nodes, each of which can fail at a random moment of time (a failure), after which repair of the node begins immediately (restoration) and continues for a random time.

The following system states are possible:

S0 - both nodes are operational;
S1 - the first node is being repaired, the second is operational;
S2 - the second node is being repaired, the first is operational;
S3 - both nodes are being repaired.

The transition of the system from state to state occurs at random moments of time, almost instantly. It is convenient to display the states of the system and the relationships between them using a state graph, in which the vertices are the states and the arrows are the transitions.

The transitions S0 ↔ S3 and S1 ↔ S2 are absent, because failures and restorations of the elements occur independently and at random, and the probability of the simultaneous failure (or restoration) of two elements is infinitesimally small and can be neglected.

If all the streams of events that transfer the system S from state to state are the simplest, then the process occurring in such a system will be Markovian. This is because the simplest stream has no aftereffect, i.e. in it the "future" does not depend on the "past"; in addition, it is ordinary: the probability of the simultaneous occurrence of two or more events is infinitesimally small, i.e. it is impossible to pass from one state to another bypassing several intermediate states.

For clarity, it is convenient to mark each transition arrow on the state graph with the intensity of the stream of events that transfers the system along that arrow (λ_{ij} denotes the intensity of the stream of events that transfers the system from state S_i to state S_j). Such a graph is called labeled.

Using the labeled graph of system states, it is possible to build a mathematical model of this process.

Consider the transitions of the system from some state S_i to the previous state S_{i-1} or the next state S_{i+1}. A fragment of the state graph in this case looks like this: S_{i-1} ⇄ S_i ⇄ S_{i+1}.

Let the system at time t be in the state S_i.

Denote by P_i(t) the probability of the i-th state of the system, i.e. the probability that the system at time t is in state S_i. For any moment of time, Σ_i P_i(t) = 1 holds.

Let us determine the probability that at the moment of time t + ∆t the system will be in state S_i. This can happen in the following cases:

1) The system was in state S_i and did not leave it during the time ∆t. This means that during the time ∆t no event occurred in the stream that takes the system into state S_{i+1} (a stream with intensity λ_{i,i+1}), nor in the stream that takes it into state S_{i-1} (a stream with intensity λ_{i,i-1}). Let us determine the probability of this for small ∆t.

Under the exponential law of distribution of the time between two adjacent requests, which corresponds to the simplest stream of events, the probability that no event of a stream with intensity λ occurs during the interval ∆t is e^{-λ∆t}. Expanding f(t) = e^{-λt} in a Taylor series (t > 0), we get, for t = ∆t,

f(∆t) = f(0) + f'(0)∆t + f''(0)∆t²/2 + ... = 1 - λ∆t + (λ∆t)²/2 - ... ≈ 1 - λ∆t  as ∆t → 0.

Hence the probability that no event occurs in the stream with intensity λ_{i,i+1} is approximately 1 - λ_{i,i+1}∆t, and similarly for the stream with intensity λ_{i,i-1} it is approximately 1 - λ_{i,i-1}∆t. The probability that during the interval ∆t (as ∆t → 0) no event occurs in either stream is

(1 - λ_{i,i+1}∆t)(1 - λ_{i,i-1}∆t) ≈ 1 - (λ_{i,i+1} + λ_{i,i-1})∆t.

Thus, the probability that the system has not left the state S_i during the time ∆t is, for small ∆t,

P(S_i → S_i) = 1 - (λ_{i,i+1} + λ_{i,i-1})∆t.

2) The system was in state S_{i-1} and during the time ∆t passed into the state S_i, i.e. at least one event occurred in the stream with intensity λ_{i-1,i}. For the simplest stream with intensity λ, the probability of at least one event in time ∆t is 1 - e^{-λ∆t} ≈ λ∆t; for our case, the probability of such a transition is λ_{i-1,i}∆t.

3) The system was in state S_{i+1} and during the time ∆t passed into the state S_i. The probability of this is λ_{i+1,i}∆t.

Then the probability that the system at time t + ∆t will be in state S_i is

P_i(t + ∆t) = P_i(t)(1 - (λ_{i,i+1} + λ_{i,i-1})∆t) + P_{i-1}(t) λ_{i-1,i}∆t + P_{i+1}(t) λ_{i+1,i}∆t.

Subtracting P_i(t) from both sides, dividing by ∆t and passing to the limit as ∆t → 0, we obtain

dP_i(t)/dt = λ_{i-1,i} P_{i-1}(t) + λ_{i+1,i} P_{i+1}(t) - (λ_{i,i+1} + λ_{i,i-1}) P_i(t).

Substituting the corresponding values of the transition intensities, we obtain a system of differential equations describing the change of the state probabilities of the system as functions of time.

These equations are called the Kolmogorov-Chapman equations for a Markov process with discrete states.

Setting the initial conditions (for example, P_0(0) = 1, P_i(0) = 0 for i ≠ 0) and solving the equations, we obtain expressions for the probabilities of the system states as functions of time. Analytical solutions are fairly easy to obtain when there are no more than two or three equations; if there are more, the equations are usually solved numerically on a computer (for example, by the Runge-Kutta method).
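As a sketch of such a numerical solution (Python with scipy; the intensities λ1 = 1, λ2 = 2, μ1 = 2, μ2 = 3 are those of the example that follows):

```python
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2, mu1, mu2 = 1.0, 2.0, 2.0, 3.0  # failure and repair intensities

def kolmogorov(t, p):
    """Right-hand sides of the Kolmogorov equations for states S0..S3."""
    p0, p1, p2, p3 = p
    return [-(lam1 + lam2) * p0 + mu1 * p1 + mu2 * p2,
            -(lam2 + mu1) * p1 + lam1 * p0 + mu2 * p3,
            -(lam1 + mu2) * p2 + lam2 * p0 + mu1 * p3,
            -(mu1 + mu2) * p3 + lam2 * p1 + lam1 * p2]

# Initial condition P0(0) = 1: both nodes are working at t = 0.
sol = solve_ivp(kolmogorov, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1].round(3))  # tends to the final probabilities [0.4 0.2 0.267 0.133]
```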

In the theory of random processes it is proven that if the number n of system states is finite and from each of them it is possible (in a finite number of steps) to pass to any other, then there exists a limit to which the state probabilities tend as t → ∞. Such probabilities are called the final probabilities of the states, and the steady state is called the stationary mode of operation of the system.

Since in the stationary mode all the P_i are constant, all the derivatives dP_i/dt are equal to 0. Equating the left-hand sides of the system of equations to 0 and supplementing them with the equation Σ_i P_i = 1, we obtain a system of linear algebraic equations, solving which we find the values of the final probabilities.

Example. Let the failure and restoration intensities of the elements in our system be:

failures: λ1 = 1 (element 1), λ2 = 2 (element 2);
repairs: μ1 = 2 (element 1), μ2 = 3 (element 2).

Then the stationary equations, together with the normalization condition, are:


P0 + P1 + P2 + P3 = 1

0 = -(1+2)P0 + 2P1 + 3P2
0 = -(2+2)P1 + 1P0 + 3P3
0 = -(1+3)P2 + 2P0 + 2P3
0 = -(2+3)P3 + 2P1 + 1P2

Solving this system, we get

P0 = 6/15 = 0.4; P1 = 3/15 = 0.2; P2 = 4/15 ≈ 0.27; P3 = 2/15 ≈ 0.13.
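The same values are obtained by solving the linear system directly, replacing one (linearly dependent) balance equation with the normalization condition; a sketch (Python):

```python
import numpy as np

# The four balance equations above, one row per state S0..S3:
A = np.array([[-3.0,  2.0,  3.0,  0.0],
              [ 1.0, -4.0,  0.0,  3.0],
              [ 2.0,  0.0, -4.0,  2.0],
              [ 0.0,  2.0,  1.0, -5.0]])
b = np.zeros(4)

# Replace the last equation by P0 + P1 + P2 + P3 = 1.
A[3] = [1.0, 1.0, 1.0, 1.0]
b[3] = 1.0

print(np.linalg.solve(A, b))  # [0.4 0.2 0.2667 0.1333], i.e. 6/15, 3/15, 4/15, 2/15
```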

That is, in the stationary mode the system spends on average:

40% of the time in state S0 (both nodes are operational);

20% in state S1 (the 1st element is being repaired, the 2nd is operational);

27% in state S2 (the 2nd element is being repaired, the 1st is operational);

13% in state S3 (both elements are being repaired).

Knowing the final probabilities makes it possible to evaluate the average efficiency of the system and the load on the repair service.

Let the system in state S0 bring an income of 8 units per unit of time; in state S1, an income of 3 units; in state S2, an income of 5 units; and in state S3, an income of 0.

Let the cost of repair per unit of time be 1 unit for element 1 (states S1, S3) and 2 units for element 2 (states S2, S3). Then, in the stationary mode:

The system income per unit of time will be:

W_inc = 8P0 + 3P1 + 5P2 + 0P3 = 8·0.4 + 3·0.2 + 5·(4/15) + 0·(2/15) ≈ 5.13 units.

The repair cost per unit of time:

W_rep = 0P0 + 1P1 + 2P2 + (1+2)P3 = 0·0.4 + 1·0.2 + 2·(4/15) + 3·(2/15) ≈ 1.13 units.

The profit per unit of time:

W = W_inc - W_rep ≈ 5.13 - 1.13 = 4.0 units.
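A quick check of these figures from the exact fractions (Python):

```python
from fractions import Fraction as F

P = [F(6, 15), F(3, 15), F(4, 15), F(2, 15)]  # P0..P3 found above
income = [8, 3, 5, 0]                         # income per unit time in S0..S3
repair = [0, 1, 2, 3]                         # repair cost per unit time in S0..S3

W_inc = sum(w * p for w, p in zip(income, P))  # 77/15, about 5.13
W_rep = sum(c * p for c, p in zip(repair, P))  # 17/15, about 1.13
print(W_inc, W_rep, W_inc - W_rep)             # profit = 60/15 = 4
```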

By incurring certain expenses, one can change the intensities λ and μ and, accordingly, the efficiency of the system. The feasibility of such expenses can be assessed by recalculating the P_i and the system performance indicators.
