
After the “Trente Glorieuses”

At the end of the 1960s, a shift occurred in the work of French-speaking economists, for several reasons. The training of economists improved in France with the creation in 1960 of a degree in economics where mathematics and statistics were taught.

Increasing numbers of engineers and mathematicians became interested in economics; many of them joined the ranks of university professors, thus blurring the traditional distinction between engineering economists and academics. New research centres were created: the CEPREMAP, in France, and the CORE, in Belgium, contributed significantly to the development of economic thinking. In parallel, international relations between economists became closer; many young French-speaking economists defended their theses abroad, particularly in the United States. The number of works written by economists of different nationalities increased. Thus, the particular identity of a French-speaking tradition gradually diminished.

The contributions of French-speaking economists after 1970 are numerous and concern very different fields. We distinguish three types of work: work proposing a critical reading of mainstream approaches; work developing general equilibrium models; and finally, we group together, in a more heterogeneous category, singularly innovative work in microeconomics, especially in industrial economics, and contributions in growth theory emphasising patents and innovation.

New approaches and critiques of “mainstream economics”

Many French-speaking economists have criticised mainstream economics in various ways. Some of them sought in Sraffa’s work the possibility of developing a theory of production prices which they saw as an alternative to the theory of equilibrium prices. Others were interested in the evolution of the economic system and formed the “école de la régulation”. Those who worked on inequality also appeared to be critical of the mainstream: some attempted to explain why trade between nations is unequal, others sought to build a theory of justice, while still others developed empirical research on the evolution of inequality or on the most appropriate measures to fight poverty.

New theoretical perspectives

The publication of Piero Sraffa’s Production of Commodities by Means of Commodities, of which Serge Latouche published a translation in 1970, aroused considerable interest among French-speaking economists (Arena and Ravix 1990). In their view, Sraffa’s analysis showed it to be possible to frame a theory of prices that breaks with the false symmetry between producers and consumers and emphasises the concept of reproduction over that of scarcity, thus regaining the features of the classical theory (Bidard 1991). This theory of production prices represented an alternative to the theory of value as developed by Debreu and Arrow.

These economists relied on Sraffa’s book and on the arguments put forward by some Cambridge economists to criticise the neo-classical theory of capital: since the value of capital depends on the rate of profit, a fall in the latter may make a technique employing more capital more expensive. The demand for capital is therefore not necessarily a decreasing function of the rate of profit. A problem arises, however: while this criticism is relevant to some versions of marginalism, for example Böhm-Bawerk’s analyses of interest theory, it seems inoperative when applied to general equilibrium theory, since this theory does not refer to the notion of aggregate capital (Faccarello and Lavergne 1977, 303; Bidard 1991, 93).

It was also hoped that this framework would help address some poorly resolved issues in Marxist theory, in particular the problem of the transformation of values into prices. For instance, Abraham-Frois and Berrebi (1979, 1984) sought to determine when the two conditions allowing a correct transition from values to prices - equality of the sum of prices and the sum of values; equality of aggregate profit and aggregate surplus value - were simultaneously satisfied. Other economists - Benetti et al. (1976) - were nonetheless sceptical. They pointed out that the Sraffian assumption that wages are paid post factum was not accidental. On the one hand, it was required to set up the invariant standard, and on the other it led to considering the wage as a mere category of distribution.

However, according to them, the wage is a category of distribution only because it is a category of production.[290]

The case of joint production has been debated extensively because it seems to challenge the idea that, in the Sraffian theory, prices are production prices that do not depend on demand. When each activity produces one good (and only one), and when returns are constant, it is always possible to adapt the production level to demand if the matrix of technical coefficients is indecomposable and productive. But if certain activities produce several goods simultaneously, this is no longer the case and it is not possible to eliminate demand from the analysis, which is why Abraham-Frois and Berrebi (1981) considered it to be the “hidden face” of the analysis. D’Autume (1990, 255) concluded that

in joint production, there are several production systems ensuring the production of all goods. It is therefore demand that determines the equilibrium production system. This allows us to stress that demand plays an intimate role in the determination of prices, which cannot thus be qualified, in a general way, as production prices.

Bidard (1991, 219) reached a conclusion similar to that of d’Autume, but by different means.

Theories of regulation

The work of the regulation school (“école de la régulation”) is described by Robert Boyer (b. 1943) as an attempt to extend Marx’s analyses by drawing on the tradition of the Annales school in history (Boyer 2015, 6).[291] Engineer economists - such as Michel Aglietta (b. 1938), Bernard Billaudot (b. 1939) and Robert Boyer - initiated this trend; they worked on the development of macroeconometric models within INSEE or the forecasting department of the Ministry of Finance. In the early 1970s, observing with some disappointment discrepancies between the forecasts of their models and empirical observations, they realised the limitations of modelling and sought the origin of these discrepancies in the structural transformations that the French economy was undergoing at that time.[292]

In Régulation et crises du capitalisme (1976), Aglietta developed a historical analysis of the economic and social evolution of the United States based on Marx’s distinction between the production of absolute and relative surplus-value; the inter-war period is presented as a period of transition from a regime of extensive accumulation to one of intensive accumulation.

The central factor in this process is the long-term lowering of the social cost of reproduction of labour power, which leads to a rising capacity for accumulation; this evolution allowed the development of a new mode of regulation based on a new form of work organisation, Fordism, and on a mode of consumption characterised by the mass production of standardised goods.

A comparable study was conducted for France by researchers at CEPREMAP, whose conclusion was as follows: stagflation in France was the result of the evolution of social relations and economic structures, in particular the modalities of wage bargaining, state intervention and the organisation of the banking system (Bénassy, Boyer and Gelpi 1979). They distinguished between competitive regulation, based on price adjustment, and monopoly regulation, where prices are administered or collectively negotiated. Price variations are thus disconnected from market disequilibria, which implies that the regulation of the system is ensured by other mechanisms - extension of the indirect wage, unemployment benefits, counter-cyclical budgetary policies, and the guarantee of the solvency of the banking system by the Banque de France.

The history of regulation theories is somewhat paradoxical. The founding works attempted to analyse a type of societal organisation that Aglietta described as Fordist; however, this regulation mode entered into crisis precisely at that time. The task of the advocates of regulation theory was to understand this crisis, and to do so, their initial formulations had to be thoroughly revised.

As one moves away from the period of the Trente Glorieuses, the initial expression proves to be increasingly inadequate to capture the emerging modes of development. The concept of hierarchy between institutional forms is thus introduced to account for the gradual domination of the monetary and then financial regime at the expense of the wage relationship. Thus, the theory of Fordism must give way to a political economy of institutional change within the different types of capitalism.

(Boyer 2015, 418)

The regulation school has found extensions in the economics of conventions, developed by Robert Salais, André Orléan (both former students of the École Polytechnique and ENSAE) and Olivier Favereau. They consider that theories based on rationality, competition and the achievement of equilibrium through prices are not very effective in explaining the coordination of economic activities. Instead, they emphasise the importance of conventions: to them,

the agreement between individuals, even when it is limited to the contract of a commercial exchange, is not possible without a common framework, without a constitutive convention.... We try to take into account the variety of possible coordination principles, as well as the existence of situations where a priori antagonistic aims are confronted.

(Dupuy et al. 1989, 141)

Unequal exchange, social justice, inequality and poverty

In his time, Marx had explained how capitalists exploited workers, but he did not analyse the economic mechanisms by which one country could exploit another. This issue was addressed by Arghiri Emmanuel (1911-2001) in L’échange inégal (1969) (“unequal exchange”). Where Ricardo had shown that in international trade no country could lose out, Emmanuel intended to demonstrate that countries could become wealthier at the expense of others. Observing that the real wage rate differed across countries, he gave up the Ricardian theory of wages and considered the wage as an independent variable.[293] Thus, wages determine prices, and differences in wages lead to inequality in trade. International trade is indeed a process of exploitation of poor countries: their exports are sold at low prices because their wages are low, whereas their imports from rich countries are expensive because wages there are high. Thus the labour that poor countries have to provide to pay for their imports is greater than the labour needed to produce the goods they export, in a cumulative way.[294] This analysis, which called into question the convergence between the struggles of the proletariats of the rich countries and national liberation movements, was vigorously debated by Marxist economists, especially by Charles Bettelheim (1913-2006), Christian Palloix and Samir Amin (1931-2018).[295] The debate focused in particular on the issue of wages as an independent variable.

Social justice and inequality is a field of research which witnessed major contributions by French-speaking economists after 1970, and on which they are still at the forefront of research. The thought of Serge-Christophe Kolm (b. 1932) stands out in this respect: he remarked that while economists have much to say about efficiency, they are almost silent on the distribution of welfare, that is, on social justice. Worse still, they have left out the question of inter-individual utility comparison: “Even if not the only issue, the problem of justice is essential, omnipresent and inevitable” (Kolm 1972, 13). In order to define the optimal distribution, the economist does not rely on a value judgement, but simply observes the opinions and value judgements of individuals as he observes their consumption choices. From these data, he deduces the optimal production of goods and the optimal distribution of wealth; thus, normative economics is based on the objective observation of subjective opinions.

A state of society is called equitable if each person prefers to be in her own situation rather than in any other person’s situation, that is, if each thinks of everyone else: ‘I’d rather be in my place than in hers’.... In other words, equity implies that no one can be jealous of anyone else; there is equity when no one has a possible reason to be envious.

(Kolm 1972, 23)

If we accept this definition, a series of questions can be asked. Do fair states exist at all? Are there fair states that are also efficient? Kolm’s idea provoked a wide-ranging debate, discussed by Marc Fleurbaey (1996).

This theoretical work on the notion of justice is complemented by empirical research on inequality. Thomas Piketty (b. 1971) began by looking at inequality through the prism of high incomes (Les hauts revenus en France. Inégalités et redistribution 1901-1998, 2001). Tax data show that inequalities in France fell significantly between 1914 and the middle of the century and then increased without returning to their initial level. This evolution is explained by the variation in the share of very high incomes (the top thousandth, or even ten-thousandth, of the income distribution); more precisely, the strong variations in capital income explain the evolution of high incomes and inequality. Piketty’s results contrast with those of Simon Kuznets, who explained that while inequalities increase in the first phase of development, they tend to decrease at a later stage - the French case shows the opposite. Moreover, whereas for Kuznets the evolution of the distribution is essentially due to “natural” causes, Piketty’s explanation is quite different: the evolution of the distribution is explained by fiscal policy and by the effects of the major shocks of the wars and the Great Depression. Le capital au XXIe siècle expands this first study in space and time. The main conclusion he draws is that “the history of the distribution of wealth has always been deeply political, and it cannot be reduced to purely economic mechanisms” (Piketty 2013, 20). The reduction in inequality observed in the developed countries between 1900-1910 and 1950-1960 is primarily the product of the wars and of the policies put in place following these shocks; the increase in inequality since 1970-1980 appears to be the effect of fiscal and financial policies. However, Piketty accepts that the diffusion of knowledge and skills is a central mechanism for reducing inequality. Conversely, in a world where the rate of return on capital exceeds the rate of growth - a situation he calls “the central contradiction of capitalism” (2013, 571) - the accumulation and concentration of capital is the main determinant of rising inequality; indeed, the wealth generated in the past accumulates faster than production and wages grow. He believes the answer lies in a progressive tax on capital that would prevent an increase in inequality while preserving the incentives for further accumulation.

Piketty’s analysis suggested the need for tax reforms, the implementation of which may be an issue in an open global economy; in other words, is tax progressivity compatible with globalisation, especially financial globalisation? This issue is addressed by Emmanuel Saez and Gabriel Zucman (2019); to them, globalisation does not fundamentally compromise the ability of states to tax large corporations and the wealthiest.[296] How can taxation of large fortunes be increased in a globalised world? Saez and Zucman explain that, in order to avoid tax evasion by the richest, it would be sufficient to copy the US legislation which stipulates that US citizens are taxable in the United States for life. Conversely, to prevent multinationals from declaring their profits where the tax rate is lowest, what they call the “tax gap” should be taxed.[297]

While Saez and Zucman focus on the taxation of the ultra-rich, Esther Duflo (b. 1972) addresses the poorest. Her work with Rachel Glennerster and Michael Kremer (Duflo et al. 2008) is an attempt to rethink the way economists view the fight against global poverty. She argues that too much effort has been expended in vain to address fundamental questions: What is the ultimate cause of poverty? How much confidence should be placed in market mechanisms? The research has failed to answer these questions, and even the resulting anti-poverty programmes have failed massively. Duflo proposes a contrasting approach: aid programmes must first be evaluated in detail, since “nuts and bolts” play a crucial role in their failure or success. This requires not only scientists and engineers but also what she calls “economist-plumbers”:

The economist-plumber stands at the shoulder of scientists and engineers, but has the safety net of a bounded set of assumptions. She is more concerned about “how” to do things than about “what” to do. In the pursuit of good implementation of public policy, she is willing to tinker. Field experimentation is her tool of choice.

(Duflo 2017, 3)

The very nature of her work leads the economist-plumber to confront the problem of causal inference. For example, she must be able to determine whether a reduction in class size improves student outcomes. But in order to ascertain this, one must be able to estimate the results that students in small classes would have achieved if they had been in large classes, and conversely. It is not possible, claims Duflo, to estimate the effectiveness of the measure by tracking a student who moves from one type of class to the other over time. By contrast, the average effects of the programme on a group of students can be obtained by comparing their results with those of a similar group of students who did not receive the programme. Thus, the procedure for analysing the effects of a programme is similar to that used to test the effects of a new drug. Esther Duflo and Abhijit Banerjee applied their research agenda through field experiments, conducted following the creation of J-PAL (the Poverty Action Lab) in 2003. J-PAL implemented randomised experiments that evaluated the relevance of an anti-poverty measure by comparing its effects to the situation of a control group. Duflo’s field experiments had a great impact both on fundamental research methods and on the application of development assistance programmes by international agencies.
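To make the comparison underlying such an evaluation concrete, a minimal formalisation may help (this is a standard textbook expression, not drawn from Duflo’s own writings; the notation is ours). When a programme is randomly assigned, its average effect can be estimated by a simple difference in means:

\[
\widehat{\tau} \;=\; \bar{y}_T - \bar{y}_C \;=\; \frac{1}{n_T}\sum_{i \in T} y_i \;-\; \frac{1}{n_C}\sum_{i \in C} y_i ,
\]

where T and C denote the randomly formed treatment and control groups of sizes n_T and n_C. Randomisation ensures that the two groups are comparable on average, so the difference can be attributed to the programme rather than to pre-existing differences between students.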

Developments on general equilibrium

Another significant field of contributions by French-speaking economists after 1970 concerned general equilibrium. These contributions were very diverse. First, some economists sought to clarify some of the properties of the Arrow-Debreu model (1953), in particular the uniqueness of equilibrium and the properties of aggregate demand functions. As some questions - such as the question of the uniqueness of equilibrium - remained unanswered, other economists reformulated the problem or turned to old models that had, somewhat surprisingly, been forgotten during the 1950s and 1960s. Finally, some French-speaking economists developed non-Walrasian general equilibrium models.

New insights into the Arrow-Debreu model

The Arrow-Debreu model has been considerably discussed and amended in France since the 1970s. The discussions first focused on the important issue of the uniqueness and the stability of equilibrium.[298] Abraham Wald (1936, 376) had demonstrated the uniqueness of equilibrium under the assumption that the marginal utility of a good depends more on the variation in the quantity of that good than on the variation in the quantity of other goods; then Arrow and Hurwicz (1958, 546) proved the uniqueness of the equilibrium under the assumption that goods are pure substitutes. For Debreu (1970), trying to demonstrate the uniqueness of equilibrium in a general way is an overly ambitious goal. An economy can have several equilibria as long as these equilibria are locally unique; moreover, under very general conditions, if an economy has a finite number of equilibria, these equilibria are locally unique. For the number of equilibria to be finite, in the vicinity of any equilibrium, a change in the price vector must affect the demand for goods: the economy cannot remain in equilibrium. In this case, the equilibrium is said to be regular; and if all the equilibria of an economy are regular, the economy is said to be regular. For Debreu, situations where the number of equilibria is infinite are not widespread; therefore, any “random” exchange economy can be said to have a finite number of equilibria, and these equilibria are locally unique.

Another debate involved the functions of excess demand, based on the observation that aggregate and individual functions have different characteristics.[299] Economists then discussed at length the issue of aggregating individual demands. Sonnenschein (1972, 549) asked the reciprocal question: “Can an arbitrary continuous function, defined on a compact subset C of the interior of a positive orthant, be an excess demand function for some commodity in a general equilibrium economy?” Sonnenschein (1973), Mantel (1974) and Debreu (1974) answered positively. Debreu considered an exchange economy with l goods and m consumers. The demand of the i-th consumer is a function f_i(p, p \cdot e_i) of the price vector p and of his income p \cdot e_i, where e_i is the vector of initial endowments of i. The aggregate excess demand function F is defined as the sum of individual excess demands:

\[
F(p) = \sum_{i=1}^{m} \left[ f_i(p, p \cdot e_i) - e_i \right] \qquad (2)
\]

Under the usual hypotheses, F is continuous and fulfils Walras’ law, p \cdot F(p) = 0. To Sonnenschein’s question about the possibility of finding a vector of initial endowments e and m consumers satisfying relation (2), Debreu’s answer was yes, provided that the number of consumers is not less than the number of goods.

The reactions of French-speaking economists to these results were sharply contrasting. For Yves Balasko (1988, 69), the main interest of the Sonnenschein-Mantel-Debreu theorem was to show that the local behaviour of aggregate excess demand can take any form, which is important when studying stability. He nevertheless admitted that this demonstration did not exclude a specific behaviour of aggregate excess demand when one or more prices tend towards their limits: zero or infinity. Conversely, Alan Kirman (1989, 126) considered that the edifice around the general theory would be pointless, “in the sense that one cannot expect it to house the elements of a scientific theory, one capable of producing empirically falsifiable propositions”. Lastly, Jean-Michel Grandmont’s analysis was different: he tried to demonstrate the stability of equilibrium without relying on assumptions - such as pure substitutability - that are directly related to aggregate demand functions. He took up an idea that Werner Hildenbrand (1983) had already put forward: to analyse the relationship between aggregate demand and individual demand, it is necessary to specify the characteristics of the distribution of agents, but he exploited it differently. While Hildenbrand was interested in the role of the income distribution, Grandmont emphasised the heterogeneity of preferences: when it increases, ceteris paribus, the equilibrium becomes unique and stable in any process of “tâtonnement”.

Temporary general equilibrium models

Other attempts have been made to reformulate the analysis of general equilibrium. In line with the work of Erik Lindahl (1939), Hicks (1939) and Patinkin (1956), Jean-Michel Grandmont developed temporary general equilibrium models (1970, 1974). Instead of an atemporal equilibrium à la Arrow-Debreu, where prices on all markets (spot and forward) are fixed at the initial date, Grandmont developed a model which addressed not only the issue of stability but also that of the existence of equilibrium.

The reference model considers an exchange economy in which time is divided into discrete periods t = 1, ..., n. Consumption goods c_t are not durable, and money, m_t, is the only asset that allows the transfer of purchasing power from one period to another. At each date, agents make decisions based on current prices, p_t, and prevailing interest rates, as well as on the expected values of these variables. The agent’s expectations are a function of his information about current and past prices (viewed as given). The prices expected for the following periods are thus functions ψ(p_1) of current prices, and the demands of one agent for goods and money are determined by his utility maximisation as follows:
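The maximisation program, numbered (3) in the original, does not survive in this extraction. A plausible two-period sketch of its content, under the assumptions just listed and with notation of our own (endowments e_1 and e_2, initial money balance \bar{m}), is:

\[
\max_{c_1,\, m_1 \ge 0} \; u\big(c_1, c_2^{e}\big)
\quad \text{subject to} \quad
p_1 c_1 + m_1 \le p_1 e_1 + \bar{m}, \qquad
\psi(p_1)\, c_2^{e} \le \psi(p_1)\, e_2 + m_1 \qquad (3)
\]

that is, the agent chooses current consumption and money holdings given the current price p_1 and the price ψ(p_1) he expects for the following period, his future consumption c_2^{e} being financed by the money he carries over.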

For utility to depend only on the agent’s real cash balances, either expected prices must be equal to current prices or the expected inflation rate must be independent of the current price level.

By rewriting the utility function as a function of current consumption and of the real value of his money holdings, Grandmont showed that the results of program (3) are identical to those of the following system:
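The system referred to is likewise missing from this extraction; a sketch of the rewriting described in the text, in the same hypothetical notation as above, is:

\[
v(c_1, m_1, p_1) \;=\; u\!\left(c_1,\; e_2 + \frac{m_1}{\psi(p_1)}\right),
\qquad
\max_{c_1,\, m_1 \ge 0} \; v(c_1, m_1, p_1)
\quad \text{subject to} \quad
p_1 c_1 + m_1 \le p_1 e_1 + \bar{m} .
\]

When the elasticity of expected prices with respect to current prices is equal to 1, ψ(λp_1) = λψ(p_1) for any λ > 0, so v depends on m_1 and p_1 only through the real balance m_1/p_1; otherwise it does not, which is the point made in the next sentence.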

Thus, if the elasticity of expected prices is different from 1, the function v is not homogeneous of degree zero in m_1 and p_1, so it cannot be claimed that utility depends on the agents’ real cash holdings. Grandmont (1974) finally showed that the real cash effect may be too weak, when the elasticity of expectations is 1, to ensure the existence of an equilibrium; the existence of equilibrium requires that the prices anticipated by some agents are not very sensitive to changes in current prices. Multiple variations of the temporary general equilibrium model were developed later; Grandmont and Laroque (1973), then Fuchs and Laroque (1976) enriched them by introducing the hypothesis of multiple generations.

Overlapping-generations models

Allais (1947) is credited with introducing several generations of agents into a general equilibrium model; in particular, he developed a model where the life of individuals spans two periods, during which two generations coexist, the young and the old. People only work when they are young; their income and consumption in old age are entirely determined by their behaviour while young. The competitive mechanism leads to the maximisation of social returns in a restricted sense, which Allais considers unsatisfactory: the model takes into account the young person’s anticipation of the satisfaction of his future needs when he will be old, but not this satisfaction in itself. However, even if the agents’ expectations are perfect, these two utilities are different. Allais concludes that

the maximisation of the social output in the restricted sense... actually implies the non-intervention of the State on the savings market, whereas the maximisation of the generalised social output [that is, which considers the satisfaction of the old] is in no way incompatible with such an action.

(1947, 771)[300]

Despite the apparent similarities between the overlapping-generations models and the Arrow-Debreu model, their structures are different. A first problem was thus to establish the existence of an equilibrium and to study its properties. To demonstrate the existence of equilibrium in a model à la Debreu, Lionel McKenzie (1959, 55) had introduced the hypothesis of irreducibility, defined as follows: “In loose terms, an economy is irreducible if it cannot be divided into two groups of consumers where one group is unable to supply any goods which the other group wants”. To apply it to an overlapping-generations model where individuals live for only two periods, Balasko, Cass and Shell reformulated it: one cannot improve the welfare of a consumer of generation t without redistributions that involve consumers of the next generation (Balasko et al. 1980). Under this assumption, it is shown that this model has at least one equilibrium, but it can have several and even an infinite number.[301]

In 1985, Olivier Blanchard criticised the assumption of an infinite horizon for economic agents (which underlies Barro’s 1974 analysis of Ricardian equivalence). To do so, he developed a continuous-time model, assuming that the instantaneous probability of death is the same for all agents regardless of their age. In this context, the effects of expenditure financing are quite different from Barro’s. If expenditure is financed by borrowing, so that taxes will rise only later, and individuals believe that they are likely to die before taxes increase, then the demand for goods will increase. Similarly, if the government distributes public debt securities to agents while increasing taxes to pay the interest on the debt, this will have effects on the real economy which will differ depending on whether the economy is open or closed.
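A brief sketch of the mechanism may help (this is the standard presentation of Blanchard’s 1985 “perpetual youth” framework; the notation is ours). With a constant instantaneous probability of death p, the probability of still being alive s periods ahead is e^{-ps}, so that households value future labour income net of taxes as

\[
h_t = \int_0^{\infty} \left( w_{t+s} - \tau_{t+s} \right) e^{-(r+p)s}\, ds ,
\]

where r is the interest rate, w labour income and τ taxes. Because future taxes are discounted at the rate r + p while the public debt accumulates at the rate r, postponing taxes raises the perceived wealth of those currently alive, and Ricardian equivalence no longer holds.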

The new classical macroeconomists defended the idea that fluctuations were caused by exogenous shocks that disrupted an otherwise stable economy.[302] Grandmont (1985, 996) opposed this idea,[303] arguing that “by contrast to currently accepted views, a competitive monetary economy of which the environment is stationary may undergo persistent and large deterministic fluctuations under laissez faire”. He showed that cycles could appear in a purely endogenous way, even though markets are in equilibrium, in the Walrasian sense of the term, at each date and expectations are fulfilled. Costas Azariadis and Roger Guesnerie (1982, 1986) demonstrated the possibility of stationary sunspot equilibria.[304] In their analysis, the origin of the cycles lies in the self-fulfilling prophecies of the agents: it is because agents believe that the existence or absence of sunspots affects prices that prices do fluctuate.

Grandmont, as well as Guesnerie and Azariadis, reasoned on a simple model where, as in Samuelson (1958), identical agents live over two periods: during their youth, they produce a non-durable consumer good, at the rate of one unit of good y_t per unit of work l_t; when they grow old, they consume the goods produced by the young. Thus, the utility of a young person is a function of his current leisure time (1 - l_t) and of his future consumption level c_{t+1}. His budget constraint establishes that the value of the goods purchased during his old age has to be equal to the value of the goods he produced in his youth. As a result, the labour supply of the young person l_t depends on his real wage, that is, on the quantity of goods he will be able to afford during his old age, so that l_t = χ(c_{t+1}). Utility maximisation shows a substitution effect and an income effect for the labour supply: an increase in the real wage induces people to work more, but the increase in income that it implies can, on the contrary, increase the demand for leisure. If the substitution effect dominates, the labour supply function χ(c_{t+1}) is increasing; otherwise, it is decreasing. If expectations are correct, then the supply of goods y_{t+1} is equal to the demand c_{t+1}, and the quantity of goods produced in period t is equal to the quantity of work performed, y_t = l_t. We can therefore write y_t = χ(y_{t+1}).

The evolution of the economy is then described as a sequence of temporary equilibria that converge, or not, towards the stationary equilibrium defined as a situation where the product is constant. The function χ is increasing if the substitution effect dominates; but under certain conditions the income effect prevails, in which case the function χ is successively increasing and decreasing. It is then not invertible, and there are configurations where endogenous cycles can occur, as suggested in Figure 14.1.

Figure 14.1 Endogenous cycles

Source: Grandmont (1985, 1021)
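A compact way of stating the possibility illustrated in Figure 14.1, in the notation of the preceding paragraph, is that a deterministic cycle of period two is a pair of output levels satisfying the backward dynamics y_t = χ(y_{t+1}):

\[
y_a = \chi(y_b), \qquad y_b = \chi(y_a), \qquad y_a \neq y_b ,
\]

so that the economy alternates for ever between y_a and y_b while markets clear and expectations are fulfilled at every date. Such a pair cannot exist when χ is increasing everywhere; it becomes possible when the income effect dominates over part of the domain, so that χ bends backwards, which is the sense in which the fluctuations are endogenous rather than driven by outside shocks.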

When sunspots can occur, the labour supply, which now depends on the probabilities attached to the two states a and b and on the corresponding prices p_a and p_b, is determined by a similar program. They show that if, in the vicinity of equilibrium, the sum of the labour supply elasticities with respect to p_a and p_b is less than -1, there is at least one stochastic equilibrium that is fully correlated with sunspots.[305]

Thus, French-speaking economists contributed to the development of the theory of endogenous cycles which constitutes a credible alternative to the theories of exogenous cycles developed by new classical economists.

Non-Walrasian equilibria

Yves Younès (1937-1996), Jacques Drèze (1929-2022), Jean-Pascal Bénassy (1948-2022) and Malinvaud were undoubtedly pursuing different objectives in their contributions. Drèze (1972) did not refer to the Keynesian tradition; he studied the existence of equilibrium in an economy where the allocation of resources is determined by prices but where these can only vary within given limits and where agents can therefore be rationed. His model departs as little as possible from the structure of the Arrow-Debreu model: there is no money and transactions are centralised. An equilibrium is obtained by introducing exchange rationing when the price constraint is binding.[306]

Younès’s (1970, 1973) concern was to study the role of money in exchange, taking as his starting point Robert Clower’s article (1967) which sought to analyse the micro-foundations of monetary theory. His thesis is that in an Arrow-Debreu model, where prices ensure the compatibility of plans, there is logically no need for money; but money is essential in an economy where prices do not necessarily ensure market equilibrium. What characterises his model is not only the rigidity of prices but also the absence of Walrasian markets: agents meet, for example in pairs, and possibly conclude an exchange. To analyse the existence of an equilibrium in such an economy, Younès relies on the notion of strong non-cooperative equilibrium. A similar structure can be found in the article he wrote with Malinvaud (1974), but the emphasis shifts: Malinvaud and Younès explain that Walrasian general equilibrium theory is unable to give macroeconomics the foundation it needs because its assumptions are too simple, particularly because it admits that “each agent can bring to the market any complex of goods and obtain in exchange any other that has the same value” (1974, 66). Jean-Pascal Bénassy (1973) presents himself as an heir to Keynes, Clower and Leijonhufvud, regretting the lack of interest in the study of disequilibrium situations, even after the contributions of Keynes and Marx. Bénassy developed a model of monopolistic competition in order to dispense with the usual hypothesis of Walrasian models according to which prices are given (both for individuals and for firms). Instead, he proposed the more realistic idea that some firms control the price of the goods they produce.

To illustrate the concepts and mechanisms of general equilibrium models, Bénassy (1973) and Malinvaud (1977) built simplified macroeconomic models that distinguish a series of regimes: classical unemployment, Keynesian unemployment and repressed inflation, in Malinvaud’s terminology. These concepts became popular and widely debated. Important lessons have been learned from this literature. The first is that macroeconomics must be based on microeconomic foundations and that only general equilibrium theory can provide this foundation. However, to do so, it must be profoundly modified, and the interest of the work mentioned above is that it offers a whole series of suggestions in this respect. Several questions remain unanswered, notably the question of expectations. The macroeconomic models of Bénassy (1973) and Malinvaud (1977) are uniperiodic (only the current period’s parameters are involved, whereas agents’ expectations about the future can affect their behaviour in the present period). To highlight the role of expectations, Bénassy (1973, 1984) and d’Autume (1985) reasoned over two periods. They focused on different factors: Bénassy on the analysis of intertemporal consumer choice and d’Autume on investment. Olivier Blanchard and Jeffrey Sachs (1982) developed an intertemporal model with rational expectations where prices and wages adjust too slowly for markets to be always in equilibrium. This was the first step in the gradual shift from non-Walrasian equilibrium models to what can be described as neo-Keynesian analyses.

Among these neo-Keynesian analyses, we can cite the approach of Bénassy (2002), who developed a series of dynamic macroeconomic models which he presents as a synthesis of four paradigms: general equilibrium theory, Keynesian theory, imperfect competition and the dynamic general equilibrium models developed by the new classical economists on the assumption of rational expectations. Blanchard, for his part, supports the Keynesian idea that variations in aggregate demand affect output. In his 1987 paper with Nobuhiro Kiyotaki, he showed that in an economy with monopolistic competition, if changing prices is costly - even if these costs are low - changes in the quantity of money can significantly affect output. While in models with purely nominal rigidities stabilising prices can stabilise output, Olivier Blanchard and Jordi Galí (2007) showed that this is not the case when nominal price rigidities and real wage rigidity coexist. In 2010, they drew similar conclusions for an economy with labour market frictions (Blanchard and Galí 2010). In such an economy, there is unemployment, but for it to be affected by technological shocks, nominal or real rigidities must exist.

New insights into economic incentives, regulatory economics and innovation

A parallel reading of Malinvaud’s (1968) and Laffont’s (1982-85) courses, both taught at the ENSAE, clearly illustrates the evolution of microeconomics between the 1960s and 1980s. Malinvaud’s lectures hardly ever mentioned information and incentives, whereas these two concepts were at the heart of Laffont’s work. This evolution started with the analysis of situations of imperfect information, which led to the development of a theory of incentives in procurement and regulation (as in the title of Laffont and Tirole’s 1993 book). In a second phase, Jean Tirole, building on these results, developed a theory of industrial organisation, conceived as the study of the functioning of markets characterised (or not) by strategic interactions. Research and development plays an essential role both in competition between companies and in state intervention; the same is true for the economy as a whole, insofar as technical progress is a crucial factor in increasing well-being. This is the reason why we have included here the presentation of neo-Schumpeterian analyses of growth that emphasise the role of innovation in economic development.

Information and incentives

In the early 1970s, economists began to analyse situations where agents have imperfect, asymmetric information and cannot observe the actions of their partners. French-speaking economists - notably Jean-Jacques Laffont (1947-2004) and Jean Tirole (b. 1953) - took part in this movement, relying on the analyses of moral hazard (Arrow 1963), adverse selection (Akerlof 1970) and the revelation principle (Gibbard 1973), and contributed significantly to the development of the theory of incentives and regulation.

The work of Jerry Green and Laffont (1977a, 1977b) on the role of incentives in planning procedures is part of this approach. They sought to determine the level of production of public goods and the taxes that finance them when taxpayers can behave as “free riders”. To solve this problem, Green and Laffont apply to public goods the procedure that William Vickrey (1961) conceived to make participants in an auction reveal their preferences.[307] [308] On this basis, Green and Laffont devised a procedure that encourages agents to reveal their preferences for public goods. In essence, it involves asking each agent to reveal the utility of that good to him and charging him only the cost that his choice inflicts on others. Whatever willingness to pay the others claim to have, the best solution for each is to tell the truth: the mechanism is incentive-compatible.
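One standard way of formalising the idea of charging each agent only the cost his choice inflicts on others is the pivotal (Clarke) rule, of which the Green-Laffont procedures are close relatives; the sketch below uses notation of our own rather than theirs. If each agent j announces a valuation \tilde{v}_j(x) for the level x of the public good, whose cost is C(x), the planner chooses

\[
x^{*} \in \arg\max_{x} \Big[ \textstyle\sum_{j} \tilde{v}_j(x) - C(x) \Big],
\qquad
\tau_i = \max_{x}\Big[\textstyle\sum_{j \neq i} \tilde{v}_j(x) - C(x)\Big] \;-\; \Big[\textstyle\sum_{j \neq i} \tilde{v}_j(x^{*}) - C(x^{*})\Big],
\]

that is, the level of the public good is chosen as if the announcements were true, and each agent pays the loss his presence imposes on the others. Under this rule, announcing one’s true valuation is a dominant strategy, whatever the others declare.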

Based on this result, Laffont and Tirole (1986a, 1986b, 1993) developed an analysis of the regulatory process as a relationship between an authority (the principal) and an operator (the agent). In particular, they studied the contracts between the State and companies operating under public service delegation, in the presence of various information asymmetries. How should the State regulate the price at which a public good is supplied by a company? There is an inevitable trade-off between rent and the incentive to perform, which is exemplified by the opposition between two regulatory systems, price-cap versus cost-plus pricing. If the State compensates the firm through a price cap, the firm’s incentive to reduce its costs is maximised, since any cost reduction increases its profits, but the rent received by the firm is completely beyond the control of the planner. On the contrary, cost-plus pricing does not give the firm any incentive to become more efficient, but is the best system for capturing the rent. Laffont and Tirole considered the case where the State contracts with a company that provides a public good whose total cost is C = βq + α, where β is the marginal cost of production, considered as independent of the quantity produced, and α the fixed cost. Under perfect information, production would be set at a level where the marginal utility of the good would be equal to its social cost, and the State would pay the firm a transfer that would just cover its costs. The problem is that the State can observe the quantity produced but does not know the true value β of the marginal cost: it faces pure adverse selection. The asymmetry of information leads to a change in the rule: the State will inform the firm of the quantity q(β′) it must produce if it announces a marginal cost β′ and of the transfer t(β′) it will receive. The firm, knowing the procedure, will announce the value β′ that maximises its profit t(β′) - βq(β′) - α. The price β′ paid by the State is higher than the marginal cost β by an amount which represents what the State has to give up to the firm because of the information asymmetry. However, according to Laffont and Tirole, the firm can make an effort, denoted e, to reduce its marginal cost (so that its total cost is C = (β - e)q + α); but this effort is unobservable by the State, which only knows β and α. The problem for the State is now one of pure moral hazard; the issue is then to set a rule that encourages the firm to make the optimal effort. To reach the optimum of perfect information, the firm must be paid a transfer equal to the social value of the product minus a constant that ensures that the firm’s profit will not be negative when it makes the effort that minimises its costs.

If we now assume that the State faces both an adverse selection problem - it does not know the marginal cost - and a moral hazard problem - it does not know the firm’s effort - it must advise the firm that if it claims that its marginal cost is β′, it will have to produce a quantity q(β′) and will receive a transfer t(β′); the purpose of this scheme is to encourage the firm to reveal all information on its costs and effort. The firm’s manager will then have an incentive to announce the value β′ which maximises its profit; the scheme is incentive-compatible if this value is the true marginal cost. The levels of output and effort will certainly be lower than if information were perfect. But, relative to the level of output, the effort is optimal because the firm benefits from the full effect of its effort on its cost. Laffont and Tirole (1993) extended these results to the case in which the firm produces many goods, and to that in which the regulator wants to encourage the firm to improve the quality of its products.
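To fix ideas on the adverse-selection part of the problem, a minimal sketch in the notation used above (our formulation, not Laffont and Tirole’s exact one) writes the firm’s profit when its true marginal cost is β and it announces β′ as

\[
U(\beta', \beta) = t(\beta') - \beta\, q(\beta') - \alpha ,
\qquad
\beta \in \arg\max_{\beta'} U(\beta', \beta)
\;\Longrightarrow\;
\frac{d}{d\beta}\, U(\beta, \beta) = -\, q(\beta).
\]

Incentive compatibility requires that announcing the truth be optimal; the envelope condition on the right shows that the firm’s rent must fall with its true cost at the rate q(β). This rent is precisely what the State has to concede because of the information asymmetry, and the reason why output is distorted below its full-information level.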

Theory of industrial organisation

Analyses of industrial organisation have undergone a profound transformation in recent years as non-cooperative game theory has emerged as the appropriate tool for studying strategic interactions between decision-makers.[309] A classic example of strategic interaction is the behaviour of an incumbent firm seeking to prevent the entry of new competitors into its industry.

Sylos-Labini (1962) considered that the incumbent firm should set its price low enough to deter potential competitors. Nevertheless, such a strategy is not always effective, since it is doubtful that the incumbent firm would maintain its price after the entry of a competitor. In order to gain credibility, it must incur sunk costs, referred to here as investments. In the end, the idea that an incumbent firm must overinvest to prevent the entry of new competitors was widely accepted by economists. However, Drew Fudenberg and Jean Tirole (1984) showed that such an investment could be a strategic drawback, as it lessens the incentive to respond to the entrant in an offensive manner. Depending on the case, the incumbent firm should overinvest or underinvest. To capture this idea, a benchmark level of investment must be specified: if the established firm deters entry, the minimum investment needed to keep the competitor out will be the basis for comparison; if it accepts the entry of a competitor, the investment made by the established firm will be compared to that which it would have made if it had regarded the entrant’s behaviour as given.

To illustrate their point, Fudenberg and Tirole reasoned on two periods and two agents, with advertising expenditure as the sunk cost. They showed that, under certain conditions, the incumbent must underinvest, because doing so makes credible its threat to lower its price if a competitor enters the market. Conversely, if the incumbent can tolerate the entry of a competitor, it will overinvest and become what they called a “fat cat”, avoiding a price war with the entrant. Thus the level of investment depends on the nature of the irrecoverable expenditure and on the slope of the reaction functions. In the case where price discrimination is impossible and the incumbent’s customer base remains loyal, the mechanism is as follows: by increasing its advertising expenditure, the incumbent reduces the potential market in which the entrant can sell its product; however, if the competitor enters and offers a lower price than the incumbent, the latter will not necessarily lower its selling price, because such a decision would reduce the revenue earned from sales to its loyal customers.

Finally, the optimal strategy depends on the circumstances; rather than trying to establish general results that would apply to any sector, Tirole showed that the specific characteristics of a given activity determine the outcome. He thus provided a framework for the development of empirical work.[310]

The evolution of technology and regulation led economists to focus on industries where network effects occur and where the parties to a transaction interact through a platform (two-sided markets). A network effect is said to exist when a good becomes more useful to each user as more people use it. But if several networks coexist, potential buyers try to find out which one will be used the most. This behaviour leads to inefficiencies. On the demand side, users delay their purchases to find out which network is more successful. On the supply side, the problem is whether or not suppliers have an advantage in making their products compatible. Laffont, Rey and Tirole (1998) studied the particular case of telecommunications at a time when several governments wanted to introduce competition into an activity where a company - public or private - had previously held monopoly power. Two sources of market failure were identified. It was pointed out that during the transition period the incumbent was reluctant to allow entrants access to its network at an appropriate price. It was also suggested that firms might use their interconnection agreements to engage in collusive practices. Based on the case of a duopoly, Laffont and his co-authors distinguished two cases. In the first, the price of a call is the same whether or not the callers belong to the same network. In the second, an operator may differentiate its tariffs, with customers paying more if they call a number in a competing network. In the first case, the existence of an equilibrium is not certain when the cost of access to the competing network and/or the degree of substitutability between the two networks is high. In the region where an equilibrium is possible, a decrease in the cost of access lowers prices and profits; the incentive to lower tariffs is weak. In the second case, firms benefit from charging different rates depending on whether their customer is calling an internal or an external number. This policy often leads to a misallocation of resources, but in the case of competition between equals it can increase welfare.

Most often, when there is a network effect, platforms allow buyers and sellers to interact. They charge a fixed fee, say a membership fee, and a variable fee on each transaction. Rochet and Tirole define these two-sided markets, in a narrow sense, as those where the allocation of charges between buyers and sellers affects the volume of transactions and the profits of the platform. Let us consider a very simple case where the platform is a monopoly and the participants do not pay a fixed cost. If p_B and p_S are the prices paid by buyers and sellers, respectively, the total price that maximises the profit of the platform is given by the traditional Lerner formula (Rochet and Tirole 2003, 997), and the price structure is given by the ratio of the elasticities of demand η_B and η_S:
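The formulas referred to can be reconstructed as follows (a sketch based on Rochet and Tirole’s 2003 monopoly benchmark, with simplified notation). With per-transaction demands D_B(p_B) and D_S(p_S), a marginal cost c per transaction and a total price p = p_B + p_S, maximising the platform’s profit (p - c) D_B(p_B) D_S(p_S) gives

\[
\frac{p - c}{p} = \frac{1}{\eta_B + \eta_S},
\qquad
\frac{p_B}{p_S} = \frac{\eta_B}{\eta_S},
\]

a Lerner formula for the total price and a price structure fixed by the ratio of the two demand elasticities: what matters for the volume of transactions is not only the level of the total charge but also how it is split between the two sides.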

The analysis of market forms, state intervention and the regulation of firms have thus been central concerns of French-speaking economists in recent times. They have emphasised the idea that planners have only imperfect information at their disposal to accomplish this task and that, for regulation to be effective, the mechanisms put in place must encourage the managers of firms to reveal the private information at their disposal and to implement measures likely to improve the productivity of their firms.

A Neo-Schumpeterian theory of growth

A final important contribution in the recent period is the analysis of innovation and growth by Philippe Aghion (b. 1956) and Peter Howitt. Their aim is to understand how the decision to innovate is made and to study its effects, by considering innovation as “creative destruction” as Schumpeter did: creation by providing new goods, new techniques, new firms, as well as destruction by making old skills, old products and existing firms obsolete. This double nature implies that the actual direction of the effects of an innovation is, a priori, often indeterminate. If, for example, research becomes more fruitful, output growth may be stimulated or hindered depending on whether the expected profits from the innovation increase or decrease. Similarly, innovations can lead to a higher or lower than optimal growth rate. The innovator considers neither the losses suffered by those he crowds out nor the gains of those who, building on his research, will discover new processes or new products.

To address these issues, Aghion and Howitt (1992, 1998, 2009) proposed several models which, although their specifications differ, follow the same logic. In The Economics of Growth (2009), they distinguish two types of goods: a final consumption good y_t, produced with labour L, whose efficiency is A_t, and an intermediate good x_t, so that y_t = (A_t L)^{1-α} x_t^{α}. In a competitive market, the relative price of the intermediate good is equal to its marginal productivity. Innovation consists in bringing in a new intermediate good that increases the efficiency of labour A_t by a factor γ. The probability that research leads to innovation, μ, is an increasing function of the ratio of research and development expenditure R_t to labour efficiency γA_t: the more advanced the technology, the more one has to spend to innovate. The innovator has a monopoly as long as no competitor has succeeded in developing a more efficient intermediate good. The head of a company intending to innovate compares the research and development expenditure that he will have to undertake with the mathematical expectation of the revenue generated by that innovation.
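The resulting growth process can be summarised in one line (a sketch consistent with the description above; the exact specification varies across Aghion and Howitt’s models):

\[
A_{t+1} =
\begin{cases}
\gamma A_t & \text{with probability } \mu, \\
A_t & \text{with probability } 1 - \mu,
\end{cases}
\qquad
\mathbb{E}(g_t) = \mu\,(\gamma - 1),
\]

so that average growth rises both with the probability of innovation μ, itself increasing in research effort, and with the size γ of innovations; this is the source of the first two conclusions listed below.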

The growth rate is, in this model, random: it is zero when, during the period, no innovation is made; otherwise, it is equal to the growth rate of labour efficiency γ. On average, it is an increasing function of the ratio of research expenditure to labour efficiency, which determines the probability that research will be successful. From their analysis, Aghion and Howitt drew five conclusions:

1. The more productive the research, the faster the growth. This suggests that countries that invest more in higher education have an advantage.

2. Growth increases with the size γ of innovations. Thus, a country with low productivity can progress rapidly if, when innovating, it is able to reach the technological frontier.

3. A patent scheme can, by making imitation more difficult, stimulate growth.

4. The model suggests that competition is harmful to growth because it reduces the innovator’s profits. Since this conclusion seems to be incompatible with the empirical results, Aghion et al. (1997, 2001) modify their hypothesis by admitting that a firm less productive than the technological leader can not only survive but also catch up. It may then be that competition favours innovation, as firms seek to innovate to escape a situation where they would be overtaken by their competitors.

5. Finally, in Aghion and Howitt’s model, the larger the population the faster the growth, because the market available to the innovator is larger. However, empirical studies reject this idea; Aghion and Howitt (1998, 407) then propose a new version of their model in which, when the population is larger, the multiplication of varieties of the final good reduces the efficiency of research efforts, which are dispersed over a multitude of goods. The relationship between population and growth vanishes.

Conclusion

Economic knowledge has always crossed borders, and French-speaking economists have long maintained close relations with English- and German-speaking scholars. However, until the middle of the twentieth century, French-speaking students, or more broadly all French-speaking people interested in political economy, preferred to read works written in French. Even when a French translation of Alfred Marshall’s Principles became available - belatedly, in 1906 - French-speaking students continued to work with Charles Gide’s Principes instead.

This situation has gradually evolved. Textbooks such as Mankiw’s or Blanchard’s are now widely distributed and no longer bear the mark of the national legacies of the past. More importantly, research is now often conducted jointly by economists of different nationalities. This collaboration of researchers with different backgrounds has proved remarkably productive, bringing many French economists to the most influential positions, both in universities and in international institutions. The specificity of French economic thinking has gradually faded, even if the most internationally renowned French economists have long continued to carry the legacy of the great tradition of engineer-economists.

Source: Faccarello, G. and Silvant, C. (eds.), A History of Economic Thought in France: The Long Nineteenth Century, Routledge, 2023, 438 pp.
