
Extreme distributions and Gibbs distributions

In the last part of the previous section, we calculated the probability that one choice has higher present value than the alternative. In this section, we pursue this further.

Suppose that we calculate the probability that the discounted present value V1 is higher than the value V2, associated with alternative choices 1 and 2, respectively. Suppose further that we represent some of the incompleteness and impreciseness of information, or the uncertainty of consequences, surrounding the value calculation by adding random terms to the present values:

V1 + ε1

and

V2 + ε2.
One interpretation is that these ε's are noise terms representing inevitable fluctuations in the present values.

A second interpretation is to think of them as (additional) evidence to support a particular choice. Other interpretations are certainly possible. For example, McFadden (1973) speaks of common or community preference and individual deviations from the common norm in the context of utility maximization.

One quick assumption that yields a Gibbs-distribution expression in the case of two alternative choices is to assume that ε = ε2 − ε1 is distributed according to

Pr(ε ≤ x) = 1/(1 + e^{−βx})

for some positive β. Choice 1 comes out ahead exactly when ε ≤ V1 − V2, so a larger value of x — that is, stronger evidence in favor of choice 1 — corresponds to a larger probability Pr(ε ≤ x). The parameter β controls to what extent changes in x translate into changes in probabilities. With a smaller value of β, a larger increase in x — that is, in evidence — is needed to increase the probability that favors choice 1, for example.

The larger β is, the smaller is the increase in x needed to change the probability by a given amount.

With this distribution, then, we immediately obtain

P1 = Pr(ε ≤ V1 − V2) = e^{βg}/(e^{βg} + e^{−βg}),

with g = (V1 − V2)/2. We also obtain P2 = 1 − P1, of course.

To reiterate an important point, we see that a smaller value of β implies a smaller difference |P1 − P2|. Namely, with a large value of β, one of the alternatives tends to dominate.
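As a minimal numerical sketch of this point (the function name `choice_probability` is ours, not the book's), the two-choice probability P1 = e^{βg}/(e^{βg} + e^{−βg}) can be evaluated directly to confirm that, for a fixed value difference V1 − V2, a larger β widens the gap |P1 − P2|:

```python
import math

def choice_probability(v1, v2, beta):
    # P1 = e^{beta*g} / (e^{beta*g} + e^{-beta*g}), with g = (v1 - v2)/2,
    # equivalently the logistic form 1 / (1 + e^{-beta*(v1 - v2)}).
    g = (v1 - v2) / 2.0
    return math.exp(beta * g) / (math.exp(beta * g) + math.exp(-beta * g))

# Equal present values: each choice gets probability 1/2, regardless of beta.
p_equal = choice_probability(1.0, 1.0, beta=3.0)

# Same value gap V1 - V2 = 1, different beta: larger beta pushes P1 toward 1,
# so one alternative tends to dominate.
p_small_beta = choice_probability(1.0, 0.0, beta=0.5)
p_large_beta = choice_probability(1.0, 0.0, beta=5.0)
```

With the illustrative numbers above, `p_small_beta` stays close to 1/2 while `p_large_beta` is close to 1, matching the remark that a small β keeps |P1 − P2| small.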

In the next subsection, this type of approach, which involves explicit calculations of probabilities of relative sizes of present values, is developed further.

As a third way, we call attention to the conditional limit theorem; see Aoki (1996a, Secs. 3.7, 3.8).

6.3.1 Type I: Extreme distribution

McFadden models agents' discrete choices as the maximization of utilities, i.e., by deriving the distribution for the maximum of U_j, j = 1, 2, ..., K, where U_j is associated with choice j and K is the total number of available choices. Assume that the observed utility is corrupted by noise,

U_j = V_j + ε_j,

and we are really interested in picking the maximum of the V's, not the U's.

Define

P_i = Pr(U_i ≥ U_j for all j ≠ i),

the probability that alternative i attains the maximum. It is known that P_i then has the form of a Gibbs distribution, i.e., the exponential form

P_i = e^{βV_i} / Σ_k e^{βV_k},

when the noise is distributed according to

Pr(ε_j ≤ x) = exp(−e^{−βx}).
This distribution may look exotic, but it is not. In the literature on extreme distributions, it is known as the type I extreme distribution. It has a nonempty domain of attraction, and conditions under which distributions of maxima converge weakly to it are known.

See Galambos (1987, Chaps. 2, 3) or Leadbetter et al. (1993) for example.
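This connection between type I extreme noise and Gibbs probabilities can be checked by simulation. In the sketch below (function names, values V_j, and β are our illustrative choices, not the book's), we draw ε_j by inverting the CDF Pr(ε ≤ x) = exp(−e^{−βx}), pick the index of the largest V_j + ε_j, and compare the empirical choice frequencies with the Gibbs form e^{βV_i}/Σ_k e^{βV_k}:

```python
import math
import random

def gibbs_probabilities(values, beta):
    # Gibbs / multinomial-logit form: P_i = e^{beta*V_i} / sum_k e^{beta*V_k}.
    weights = [math.exp(beta * v) for v in values]
    total = sum(weights)
    return [w / total for w in weights]

def noisy_argmax(values, beta, rng):
    # Draw eps_j with Pr(eps <= x) = exp(-e^{-beta*x}) by inverting the CDF:
    # x = -ln(-ln U) / beta, with U uniform on (0, 1).
    noisy = [v - math.log(-math.log(rng.random())) / beta for v in values]
    return max(range(len(values)), key=noisy.__getitem__)

values, beta = [1.0, 0.5, 0.0], 2.0   # illustrative choices
rng = random.Random(0)
n_draws = 100_000
counts = [0] * len(values)
for _ in range(n_draws):
    counts[noisy_argmax(values, beta, rng)] += 1

empirical = [c / n_draws for c in counts]
theory = gibbs_probabilities(values, beta)
```

With 100,000 draws the empirical frequencies agree with the Gibbs probabilities to within Monte Carlo error of about a tenth of a percentage point.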

This distribution arises as follows. Suppose the ε's are i.i.d. with distribution function F. Then the maximum of a large number n of samples is distributed as

Pr( max_{1≤i≤n} ε_i ≤ x ) = F(x)^n.

Taking the logarithm and writing F(x)^n = exp{n ln[1 − (1 − F(x))]} ≈ exp{−n[1 − F(x)]}, we see that if

n[1 − F(a_n x + b_n)] → e^{−x}

for some norming constants a_n > 0 and b_n, then

Pr( max_{1≤i≤n} ε_i ≤ a_n x + b_n ) → exp(−e^{−x}),

with the limit being exactly the type I extreme distribution. See Galambos (1987, p. 11).
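For a concrete instance of this convergence (a sketch under the added assumption that the ε's are unit exponentials, in which case the norming constants can be taken as a_n = 1 and b_n = ln n, since n[1 − F(x + ln n)] = e^{−x} exactly):

```python
import math

def max_cdf(x, n):
    # Exact CDF of the maximum of n i.i.d. unit exponentials: F(x)^n,
    # with F(x) = 1 - e^{-x} for x > 0.
    f = 1.0 - math.exp(-x) if x > 0 else 0.0
    return f ** n

def type1_cdf(x):
    # The type I extreme (Gumbel) limit: exp(-e^{-x}).
    return math.exp(-math.exp(-x))

# With a_n = 1 and b_n = ln n, F(x + ln n)^n should approach exp(-e^{-x})
# as n grows; track the worst error over a few points x.
errors = []
for n in (10, 1000, 100_000):
    err = max(abs(max_cdf(x + math.log(n), n) - type1_cdf(x))
              for x in (-1.0, 0.0, 1.0, 2.0))
    errors.append(err)
```

The error shrinks roughly like 1/n, so already at n = 100,000 the exact distribution of the centered maximum is indistinguishable from the type I limit to four decimal places.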

Source: Aoki, M. Modeling Aggregate Behaviour and Fluctuations in Economics. Cambridge: Cambridge University Press, 2002. 281 pp.
