
A.3 Conditional probabilities for entries, exits, and changes of type

A.3.1 Transition probabilities

We follow Costantini and Garibaldi (1989) in this section to examine the conditional probability of the next sample type, given data (that is, the entry probability and exit probability), and to describe the concept of relevance coefficients.

Given that there are K types of exchangeable agents, the probability that the next observed or sampled agent is of type j is denoted by

Pr(X_{n+1} = j | X_1 = x_1, X_2 = x_2, ..., X_n = x_n),

where the X's are exchangeable random variables that denote the types of the sampled agents, j = 1, 2, ..., K. Because the agents are exchangeable, the vector n = (n_1, n_2, ..., n_K), with n_i the number of agents of type i in the sample, can serve as a state vector.

More formally, define the random variable 1_i(k) to be the indicator random variable of the event that the i-th observation is of type k, and define

n_k = Σ_{i=1}^n 1_i(k),

the number of type-k agents among the first n observations. Then,

Pr(n_1, n_2, ..., n_K) = [n! ∕ (n_1! n_2! ··· n_K!)] ∏_{j=1}^K β_j^[n_j] ∕ β^[n],

where β_1, ..., β_K are positive parameters, β = β_1 + ··· + β_K, and β^[n] = β(β + 1) ··· (β + n − 1) is the ascending factorial.

This last statement is written more compactly as

Pr(n) = B ∏_{j=1}^K β_j^[n_j] ∕ n_j!,

¹ I owe this to Costantini (private communication).

where the normalizing constant is B = n! ∕ β^[n].
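As a numerical illustration, sequential sampling under an entry probability of the Polya form Pr(next agent is of type j | n) = (β_j + n_j) ∕ (β + n), which is the standard form associated with this setup, can be sketched as below. The parameter values β = (1, 2, 1) and the helper name `sample_sequence` are illustrative assumptions, not anything from the text:

```python
import random

def sample_sequence(betas, n_draws, seed=0):
    """Sequentially sample agent types under the Polya-type entry probability
    Pr(next agent is of type j | n) = (beta_j + n_j) / (beta + n)."""
    rng = random.Random(seed)
    K = len(betas)
    counts = [0] * K  # the state vector n = (n_1, ..., n_K)
    for _ in range(n_draws):
        # the common denominator beta + n cancels, so unnormalized
        # weights beta_j + n_j suffice for rng.choices
        weights = [betas[j] + counts[j] for j in range(K)]
        counts[rng.choices(range(K), weights=weights)[0]] += 1
    return counts

counts = sample_sequence([1.0, 2.0, 1.0], n_draws=1000)
print(counts)  # counts over K = 3 types, summing to 1000
```

Because earlier draws of a type raise its weight, the sampled composition is self-reinforcing, which is the qualitative signature of the Polya urn.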

A.3.2 Transition rates

A.4 Holding times and skeletal Markov chains

Denote by P_{ij}(t) = Pr(Y_{s+t} = j | Y_s = i) the transition function. Because of the assumed time homogeneity, it depends on t but not on s.

Let t_0, t_1, ... be the instants of transitions for the process Y, and let X_0, X_1, ... be the succession of states visited by Y. Suppose X_n = i. The time interval [t_n, t_{n+1}) is called the holding or sojourn interval, and W_n = t_{n+1} − t_n the sojourn (waiting) time, i.e.,

W_n = inf{t > 0 : Y_{t_n + t} ≠ i}.

By the Markov property, f(t) = Pr(W_n > t | X_n = i) satisfies the functional equation f(s + t) = f(s)f(t). It is known that its solution is exponential, i.e.,

Pr(W_n > t | X_n = i) = e^{−q_i t}

for some nonnegative q_i. See Breiman (1969, Theorem 15.28), for example.

State i is called absorbing if q_i = 0, and stable if q_i is finite and positive. We do not consider processes with states for which q_i is ∞. (Such states are called instantaneous states.)
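The exponential form of the sojourn time can be checked numerically through its memoryless property, Pr(W > s + t) = Pr(W > s) Pr(W > t). A minimal sketch, where the rate q = 0.5 and the helper name `check_memoryless` are illustrative choices:

```python
import random, math

def check_memoryless(q, s, t, n=100_000, seed=1):
    """Empirically check Pr(W > s + t) = Pr(W > s) * Pr(W > t) for an
    exponentially distributed sojourn time W with rate q."""
    rng = random.Random(seed)
    draws = [rng.expovariate(q) for _ in range(n)]
    # empirical survival function Pr(W > x)
    survival = lambda x: sum(w > x for w in draws) / n
    return survival(s + t), survival(s) * survival(t)

lhs, rhs = check_memoryless(q=0.5, s=1.0, t=2.0)
print(lhs, rhs)  # both close to exp(-0.5 * 3.0)
```

Both estimates approximate e^{−q(s+t)}, consistent with Pr(W > t) = e^{−q t}.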

Jump Markov processes have the strong Markov property; see Norris (1997, p. 93).

Denote the right-hand side by g(Y_τ) = g(X_n), where the strong Markov property is used. The function g is defined by

where

We use the name from the physics literature and call it the master equation. In the probability literature it is known as the backward (Chapman-)Kolmogorov equation.

We define a matrix A by

a_{ij} = q_i p_{ij} for i ≠ j, and a_{ii} = −q_i,

where p_{ij} are the jump probabilities of the embedded chain. This matrix is called a generator. Then we can write the above differential equation compactly as

dP(t)∕dt = AP(t).
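A small sketch of the generator construction, assuming the standard off-diagonal entries a_{ij} = q_i p_{ij} built from the exit rates q_i and jump probabilities p_{ij}; the two-state rates and the helper names `generator` and `expm` are illustrative. The solution P(t) = exp(tA) of the differential equation is computed here by a truncated power series, adequate for a small matrix:

```python
def generator(q, p):
    """Build the generator A with a_ij = q_i * p_ij for i != j and
    a_ii = -q_i; q is the list of exit rates, p the jump-probability matrix."""
    K = len(q)
    return [[(-q[i] if i == j else q[i] * p[i][j]) for j in range(K)]
            for i in range(K)]

def expm(A, t, terms=60):
    """P(t) = exp(tA) via the truncated series sum_k (tA)^k / k!."""
    K = len(A)
    P = [[float(i == j) for j in range(K)] for i in range(K)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        # term <- term * A * t / k  (plain matrix product)
        term = [[sum(term[i][m] * A[m][j] for m in range(K)) * t / k
                 for j in range(K)] for i in range(K)]
        P = [[P[i][j] + term[i][j] for j in range(K)] for i in range(K)]
    return P

q = [1.0, 2.0]
p = [[0.0, 1.0], [1.0, 0.0]]   # jump chain: always switch state
A = generator(q, p)
P = expm(A, t=0.5)
print([sum(row) for row in A])  # generator rows sum to 0
print([sum(row) for row in P])  # transition-probability rows sum to 1
```

The zero row sums of A guarantee that exp(tA) is a stochastic matrix for every t ≥ 0.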

A.4.1 Sojourn-time models

Suppose we have a model Y and Y_0 = i. How long does the model state stay at i, and where does it jump to next? Each state has associated with it an exponential waiting-time random variable. Lawler (1995, p. 56) uses an analogy with alarm clocks: the process jumps to the state whose alarm clock rings first. Let T = min{T_1, T_2, ..., T_K}, where T_j is the independent waiting time for the alarm clock of state j to ring, and where K is the number of states of the state space for Y. The random variable T is also exponentially distributed, because, with λ_j denoting the rate of T_j,

Pr(T > t) = Pr(T_1 > t, ..., T_K > t) = ∏_{j=1}^K e^{−λ_j t} = e^{−(λ_1 + ··· + λ_K) t}.
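The alarm-clock argument can be verified by simulation: the minimum of independent exponential clocks is exponential with the summed rate, and clock j rings first with probability proportional to its own rate. The rates (1, 2, 3) and the helper name `race` below are illustrative assumptions:

```python
import random

def race(rates, n=100_000, seed=2):
    """Simulate K competing exponential alarm clocks: record which clock
    rings first and the winning time T = min_j T_j."""
    rng = random.Random(seed)
    wins = [0] * len(rates)
    total = 0.0
    for _ in range(n):
        times = [rng.expovariate(r) for r in rates]
        t_min = min(times)
        wins[times.index(t_min)] += 1
        total += t_min
    return wins, total / n

rates = [1.0, 2.0, 3.0]               # illustrative rates; sum = 6
wins, mean_T = race(rates)
print([w / sum(wins) for w in wins])  # close to [1/6, 2/6, 3/6]
print(mean_T)                         # close to 1/6, the mean of Exp(6)
```

The winning fractions estimate Pr(clock j rings first) = λ_j ∕ Σ_k λ_k, and the mean winning time estimates 1 ∕ Σ_k λ_k.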

Source: Aoki, M. Modeling Aggregate Behaviour & Fluctuations in Economics. Cambridge: Cambridge University Press, 2002. 281 p.
