
Value functions

Here, we outline the use of value functions in making the best choice out of K possible ones. Value functions correspond to potentials associated with discrete-time or continuous-time Markov chains.

We briefly describe the connection after some introductory comments. We observe that the appeal to value functions is useful for a small number of alternatives, such as K = 2 or 3, with no uncertainty about future interest rates, perfect knowledge of externalities, and so on. With a large number of alternatives, practical implementation of the ideas behind value functions may become problematic because of imperfect information for making decisions, not to mention the computational complexities, among other things.

Suppose that K alternative choices are associated with K alternative streams of profits or utilities. These alternative streams may depend, in addition, on the state of the “world,” which may represent summaries of the aggregate choices of all agents involved. The values of these alternatives are calculated as the present values of the discounted sums of the streams of profits, utilities, and the like, which render the values dependent on the state of the world. Using the risk-neutral interest rate r, the value V must satisfy, in a continuous-time formulation,

\[ rV = \rho + \frac{1}{dt}\,E(dV), \]
where the first term on the right is the flow of rewards or revenues and the second is the expected capital gain term. In the finance literature, V refers to the value of some financial asset. The sum of flow return and the capital-gain term must equal the yield or return to the asset on the left-hand side in order for the asset to be willingly held. In the finance literature, the underlying stochastic processes are usually taken to be some diffusion processes. Here in evaluating alternative choices using value functions, the underlying stochastic processes are (jump) Markov processes.

Strictly speaking, we do not know the time profiles of r or of ρ. For now, we proceed with our examination assuming that r and ρ are known. The flow ρ must be replaced by its expected value or some kind of estimate if it is not a known deterministic quantity. Later, we incorporate imprecision and incompleteness of information into the probability distribution functions we specify.

To each state i of the state space S, which has at most countably many states, the value function over an infinite horizon with the initial state i is defined by

\[ V_i = E_i\!\left[ \int_0^\infty e^{-rt}\,\rho(X_t)\,dt \right], \]
where Ei denotes the expectation with the initial condition X0 = i of a jump Markov process {Xt}, t ≥ 0. It is known that V = (Vi : i ∈ S) is the unique bounded solution to

\[ rV_i = \rho_i + \sum_{j \in S} w(i,j)\,V_j, \qquad i \in S, \]

that is, to rV = ρ + WV in matrix form,
where W is the generator matrix of the jump process, with transition rates w(i, j); that is, the matrix of transition probabilities P(t) = (pij(t)) satisfies the backward equation

\[ \frac{dP(t)}{dt} = WP(t), \]
with P(0) = I. See Norris (1997, Sec. 4) for a readable introduction. See also Kelly (1979) and Doyle and Snell (1984) for relations of value functions to potentials and their representation as electrical networks.
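As a numerical sketch of this characterization, the script below solves (rI − W)V = ρ for a small three-state chain and checks the balance condition rVi = ρi + Σj w(i, j)Vj in every state. The generator W, reward vector ρ, and rate r are illustrative assumptions invented for the example, not values from the text.

```python
# Sketch: solve r*V = rho + W*V, i.e. (r*I - W) V = rho, for a small
# jump Markov chain.  W, rho, and r below are illustrative assumptions.

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

r = 0.05                          # risk-neutral interest rate
W = [[-0.3, 0.2, 0.1],            # generator matrix: each row sums to zero
     [0.4, -0.6, 0.2],
     [0.1, 0.3, -0.4]]
rho = [1.0, 0.5, 2.0]             # flow rewards per state

n = len(W)
A = [[r * (i == j) - W[i][j] for j in range(n)] for i in range(n)]
V = solve_linear(A, rho)          # unique bounded solution

# Check the balance condition r*V_i = rho_i + sum_j w(i,j)*V_j in each state.
for i in range(n):
    assert abs(r * V[i] - rho[i] - sum(W[i][j] * V[j] for j in range(n))) < 1e-9
```

With positive flows ρ, the solution V is positive and of the order ρ/r, as the present-value interpretation suggests.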

Here is a simple illustration of the approach with two states, 1 and 2, say, without recourse to Ito calculus. Assume simply that state changes follow a two-state jump Markov process, described by the transition rates w(1, 2) and w(2, 1). The first is the transition rate from state 1 to state 2, and the second from state 2 to state 1. In a small time interval of duration ∆t, then, state 1 changes to state 2 with probability w(1, 2)∆t + o(∆t), for example.
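This small-interval transition probability can be checked numerically. The sketch below (with an assumed rate w(1, 2) = 2 and ∆t = 0.01, chosen only for illustration) draws exponentially distributed holding times and compares the fraction that jump within ∆t against w(1, 2)∆t.

```python
# Sketch: check that over a short interval dt a two-state jump process
# leaves state 1 with probability w(1,2)*dt + o(dt).
# The rate and interval values are illustrative assumptions.
import math
import random

random.seed(1)
w12 = 2.0        # transition rate from state 1 to state 2
dt = 0.01        # short time interval

# Exact probability of leaving state 1 within dt (exponential holding time).
p_exact = 1.0 - math.exp(-w12 * dt)

# Monte Carlo: draw exponential holding times, count departures before dt.
n = 200_000
jumps = sum(1 for _ in range(n) if random.expovariate(w12) < dt)
p_mc = jumps / n

assert abs(p_exact - w12 * dt) < (w12 * dt) ** 2   # the o(dt) error term
assert abs(p_mc - p_exact) < 0.002                 # sampling error
```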

The value function over an infinite horizon is given by

\[ rV_1 = \rho_1 + w(1,1)V_1 + w(1,2)V_2 = \rho_1 + w(1,2)(V_2 - V_1) \]

and

\[ rV_2 = \rho_2 + w(2,1)V_1 + w(2,2)V_2 = \rho_2 + w(2,1)(V_1 - V_2), \]

where the subscript refers to the state. For example, ρ1 is the flow revenue in state 1. Recall that w(1, 1) = −w(1, 2), and similarly w(2, 2) = −w(2, 1).

Solving them for the values, we see that they are the weighted averages of the two present values of the flow revenues

\[ V_1 = \frac{r + w(2,1)}{r + w(1,2) + w(2,1)}\,\frac{\rho_1}{r} + \frac{w(1,2)}{r + w(1,2) + w(2,1)}\,\frac{\rho_2}{r} \]

and

\[ V_2 = \frac{w(2,1)}{r + w(1,2) + w(2,1)}\,\frac{\rho_1}{r} + \frac{r + w(1,2)}{r + w(1,2) + w(2,1)}\,\frac{\rho_2}{r}. \]

The expression ρ1/r is the present value of the stream ρ1, and similarly for ρ2/r. Given that the state changes, the expression w(1, 2)/(w(1, 2) + w(2, 1)) gives the probability that state 1 changes to state 2, and analogously for the expression w(2, 1)/(w(1, 2) + w(2, 1)).

In choosing alternatives, it is the difference of the two present values that matters, not their magnitudes. With this setup, the difference in values of the preceding example is given by

\[ V_1 - V_2 = \frac{\rho_1 - \rho_2}{r}\,\frac{r}{r + w(1,2) + w(2,1)}. \]

With deterministic flows, the first fraction is the difference of the present values. The second factor shows how much that difference is reduced by the fluctuations of the states.
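As a quick numerical check of this decomposition, the sketch below solves the two value equations directly by Cramer's rule and compares the gap V1 − V2 with the product of the two factors. The values of r, the transition rates, and the flows are illustrative assumptions.

```python
# Sketch: solve the two-state value equations
#   r*V1 = rho1 + w12*(V2 - V1),   r*V2 = rho2 + w21*(V1 - V2)
# and check the difference formula
#   V1 - V2 = ((rho1 - rho2)/r) * (r / (r + w12 + w21)).
# All numbers are illustrative assumptions.

r, w12, w21 = 0.05, 0.3, 0.2
rho1, rho2 = 2.0, 1.0

# Linear system: (r + w12)V1 - w12*V2 = rho1 ; -w21*V1 + (r + w21)V2 = rho2
det = (r + w12) * (r + w21) - w12 * w21      # equals r*(r + w12 + w21)
V1 = ((r + w21) * rho1 + w12 * rho2) / det
V2 = (w21 * rho1 + (r + w12) * rho2) / det

# Difference of present values, shrunk by the state-fluctuation factor.
diff = ((rho1 - rho2) / r) * (r / (r + w12 + w21))
assert abs((V1 - V2) - diff) < 1e-12

# V1 is the weighted average of the present values rho1/r and rho2/r.
pi11 = (r + w21) / (r + w12 + w21)
pi12 = w12 / (r + w12 + w21)
assert abs(V1 - (pi11 * rho1 / r + pi12 * rho2 / r)) < 1e-10
```

Note that the shrinkage factor r/(r + w12 + w21) is at most one, so switching risk always compresses the value gap relative to the deterministic present values.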

If there are two choices that affect the revenue flows, then we may represent these effects by making ρ1 and ρ2, and hence V1 and V2, functions of the choice variables as well as of some (vector-valued) parameter θ that represents other (macroeconomic) variables describing economic environments.

Fractions of agents making the same choice may be among its components. For example, V1(c, θ) may stand for the present value of state 1, given the choice c and the (environmental) parameter θ. Similarly for V2(c, θ). If the numbers of states are different for different choices, we just focus on the states we are interested in and compare their values.

Suppose we are interested in V1(c, θ) for two alternative choices of c: c = c1 and c = c2. Suppose further that we know

\[ V_1(c_1, \theta) > V_1(c_2, \theta) \]

for θ ∈ Θ, and the inequality is reversed when θ is not in Θ. We may use the likelihood ratio Pr(θ ∈ Θ)/Pr(θ ∉ Θ) to represent the status of agents' information about these possibilities. We then express the probability of one choice being superior to the alternative as

\[ P(c_1) = \frac{1}{1 + \exp\{-\beta\,[V_1(c_1,\theta) - V_1(c_2,\theta)]\}}. \]
Here, we have introduced a Gibbs distribution for the discrete choice. This type of distribution is also found in the literature on discrete choice models. See for example Anderson et al. (1993) or Amemiya (1985).
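A minimal sketch of such a Gibbs (logistic) choice probability for two alternatives follows. The function name, the numerical values, and the parameter β, which measures how sharply value differences translate into choice probabilities, are assumptions made for illustration.

```python
# Sketch: Gibbs (logistic) distribution over two discrete choices.
# Only the value difference v1 - v2 matters, consistent with the text.
# beta and the sample values are illustrative assumptions.
import math

def gibbs_choice_prob(v1, v2, beta):
    """Probability of picking choice 1 given values v1, v2 and
    precision parameter beta (larger beta = sharper choices)."""
    return 1.0 / (1.0 + math.exp(-beta * (v1 - v2)))

# Equal values give a fifty-fifty choice; the two probabilities sum to one.
assert gibbs_choice_prob(1.0, 1.0, beta=2.0) == 0.5
p1 = gibbs_choice_prob(1.5, 1.0, beta=2.0)
p2 = gibbs_choice_prob(1.0, 1.5, beta=2.0)
assert abs(p1 + p2 - 1.0) < 1e-12
assert p1 > 0.5          # the higher-valued choice is more likely
```

As β grows, the probability concentrates on the higher-valued choice; as β shrinks toward zero, the choice becomes uniform, which is one way to model imperfect information about the values.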

The approach outlined above goes back at least to McFadden (1972), and we turn to his approach next.

Source: Aoki, M. Modeling Aggregate Behaviour and Fluctuations in Economics. Cambridge: Cambridge University Press, 2002. 281 p.
