Oz Shy, Industrial Organization: Theory and Applications





















For example, an outcome is a list of actions where the first action on the list is the action chosen by player 1, the second by player 2, and so on. The literature uses the term action profile to describe the list of actions chosen by all players, which is what we call an outcome.

For our purposes there is no harm in using the term outcome instead of the term action profile for describing this list of actions.

In the literature one often uses the term strategy instead of the term action, and therefore strategy set instead of action set, since in a normal form game there is no distinction between the two terms. However, the two terms do differ when we proceed to analyze extensive form games in section 2. The best way to test whether Definition 2.

A simple way to describe the data that define a particular game is to display them in a matrix form. Consider the following game described in Table 2. We now argue that Table 2. Third, the entries of the matrix i. The story behind this game is as follows. If both countries engage in a war, then each country gains a utility of 1. If one country plays WAR while the other plays PEACE, then the aggressive country reaches the highest possible utility, since it "wins" a war against the nonviolent country with no effort.

Under this outcome the utility of the "pacifist country" should be the lowest (equal to zero in our example). In the literature, the game described in Table 2. is known as the Prisoners' Dilemma. Instead of having two countries fighting a war, consider two prisoners suspected of having committed a crime, for which the police lack sufficient evidence to convict either suspect. The two prisoners are put in two different isolated cells and are offered a lower punishment (or a higher payoff) if they confess to having jointly committed this crime.

In the present analysis we refrain from raising the question of whether the game described in Table 2. accurately describes reality. Instead, we ask a different set of questions: namely, given that countries in the world behave like those described in Table 2. , what outcome should we expect? In order to perform this task, we need to define equilibrium concepts.

Once the game is properly defined, we can realize that games may have many outcomes. Therefore, simply listing all the possible outcomes (four outcomes in the game described in Table 2. ) does not by itself tell us how the game will be played. For example, can you predict how a game like the one described in Table 2. will be played?

Will there be a war, or will peace prevail? Note that formulating a game without having the ability to predict implies that the game is of little value to the researcher. In order to make predictions, we need to develop methods and define algorithms for narrowing down the set of all outcomes to a smaller set that we call equilibrium outcomes. We also must specify properties that we find desirable for an equilibrium to fulfill.

Ideally, we would like to find a method that would select only one outcome. If this happens, we say that the equilibrium is unique. However, as we show below, the equilibrium concepts developed here often fail to be unique.

Moreover, the opposite extreme may occur where a particular equilibrium may not exist at all. A game that cannot be solved for equilibria is of less interest to us since no real-life prediction can be made. Before we proceed to defining our first equilibrium concept, we need to define one additional piece of notation.

Now, pick a certain player, whom we will call player i. Then, we are left with the list of what all players are playing except player i, which we denote by a−i. Note that after this minor surgical operation is performed, we can still express an outcome as a union of what action player i plays and all the other players' actions. That is, an outcome a can be expressed as a = (ai, a−i). Our first equilibrium concept, called equilibrium in dominant strategies, is a highly desirable equilibrium, in the sense that if it exists, it describes the most intuitively plausible prediction of what players would actually do.

The following definition applies to a single player, in the sense that it classifies actions in a player's action set according to a certain criterion. A particular action ãi is said to be a dominant action for player i if, no matter what all other players are playing, playing ãi always maximizes player i's payoff.

Formally, ãi is a dominant action for player i if πi(ãi, a−i) ≥ πi(ai, a−i) for every action ai and for every choice of actions a−i by all players except i. It has to be shown that no matter what player 2 does, player 1 is always better off by starting a war. Thus, we have to scan over all the possible actions that can be played by player 2. We now turn to defining our first equilibrium concept. An equilibrium in dominant actions is simply an outcome where each player plays a dominant action. Clearly, since WAR is a dominant action for each player in the game described in Table 2. , this game has an equilibrium in dominant actions: both countries play WAR.
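The scan over the opponent's actions described above can be sketched mechanically. In this sketch the Peace-War payoffs are assumed numbers chosen only to match the ordering described in the text (mutual WAR gives 1 each, mutual PEACE gives 2 each, the aggressor gets the highest payoff, here 3, and the pacifist gets 0):

```python
# Peace-War payoffs (row player, column player); the numbers are assumed
# for illustration, matching the ordering described in the text.
PAYOFF = {
    ("PEACE", "PEACE"): (2, 2),
    ("PEACE", "WAR"):   (0, 3),
    ("WAR",   "PEACE"): (3, 0),
    ("WAR",   "WAR"):   (1, 1),
}
ACTIONS = ("PEACE", "WAR")

def is_dominant(action, player):
    """True if `action` maximizes `player`'s payoff against every
    possible opponent action (the definition of a dominant action)."""
    for opp in ACTIONS:                     # scan all opponent actions
        for alt in ACTIONS:                 # compare with every alternative
            profile = (action, opp) if player == 0 else (opp, action)
            alt_profile = (alt, opp) if player == 0 else (opp, alt)
            if PAYOFF[profile][player] < PAYOFF[alt_profile][player]:
                return False
    return True

print(is_dominant("WAR", 0))    # → True
print(is_dominant("PEACE", 0))  # → False
```

Running the same check for player 2 confirms that WAR is dominant for both countries, so this game has an equilibrium in dominant actions.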

Although an equilibrium in dominant actions constitutes a very reasonable prediction of how players may interact in the real world, unfortunately, this equilibrium does not exist for most games of interest to us. To demonstrate this point, let us analyze the following Battle of the Sexes game described in Table 2.

The intuition behind this rather. That is, assuming that the payoffs to the players in Table 2. Both of them gain a higher utility if they go together to one of these events. Thus, the Battle of the Sexes is sometimes referred to as a coordination game. The Battle of the Sexes game exhibited in Table 2.

For example, in chapter 10 we analyze economies in which products operate on different standards such as different TV systems.

The Battle of the Sexes game happens to be an ideal theoretical framework to model two firms with two available actions: choose standard 1, or standard 2. Failure to have both firms choosing the same standard may result in having consumers reject the product, thereby leaving the two firms with zero profits. After formulating the Battle of the Sexes game, we now seek to find some predictions for this game.

However, the reader will probably be disappointed to find out that this game has no equilibrium in dominant actions. It is sufficient to show that one of the players does not have a dominant action. In this case, there cannot be an equilibrium in dominant actions, since one player will not have a dominant action to play.

So, we have shown that one player does not have a dominant action, and this suffices to conclude that Definition 2. So far we have failed to develop an equilibrium concept that would select an outcome that would be a "reasonable" prediction for this model.

In 1950, John Nash provided an existence proof for an equilibrium concept (earlier used by Cournot when studying duopoly) that has become the most commonly used equilibrium concept in analyzing games.

The general methodology for searching which outcomes constitute a NE is to check whether players benefit from a unilateral deviation from a certain outcome. Once we find an outcome in which no player can benefit from any deviation from the action played in that outcome, we can assert that we found a NE outcome.
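This unilateral-deviation test is easy to mechanize. A minimal sketch for the Peace-War game follows; the payoff numbers are assumed for illustration, and only their ordering matters for the result:

```python
from itertools import product

# Assumed Peace-War payoffs (row player, column player).
PAYOFF = {
    ("PEACE", "PEACE"): (2, 2),
    ("PEACE", "WAR"):   (0, 3),
    ("WAR",   "PEACE"): (3, 0),
    ("WAR",   "WAR"):   (1, 1),
}
ACTIONS = ("PEACE", "WAR")

def is_nash(outcome):
    """An outcome is a NE if no player benefits from a unilateral deviation."""
    for player in (0, 1):
        for alt in ACTIONS:
            deviated = list(outcome)
            deviated[player] = alt
            if PAYOFF[tuple(deviated)][player] > PAYOFF[outcome][player]:
                return False
    return True

nash_outcomes = [o for o in product(ACTIONS, repeat=2) if is_nash(o)]
print(nash_outcomes)   # → [('WAR', 'WAR')]
```

Checking every outcome against every unilateral deviation reproduces the conclusion of the text: the only NE of this game has both countries playing WAR.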

We continue our discussion of the NE with the investigation of the relationship between Nash equilibrium and equilibrium in dominant actions. To demonstrate the relationship between the two equilibrium concepts, we first search for the NE outcomes for the game described in Table 2. Can an equilibrium in dominant actions also be a NE? Not surprisingly, yes, it can! Since an equilibrium in dominant actions means that each player plays a dominant action, no player would find it beneficial to deviate no matter how the others play.

In particular, no player would deviate if the other players stick to their dominant actions. An equilibrium in dominant actions outcome is also a NE. However, a NE outcome need not be an equilibrium in dominant actions. We leave it to the reader to verify that no other outcome in this game is a NE. Therefore, this equilibrium is unique. The second part of Proposition 2.

We now demonstrate that a Nash equilibrium need not be unique. For example, applying Definition 2. However, this follows immediately from 2. So far we have seen examples where there is one or more NE. That is, as in the Battle of the Sexes game displayed in Table 2. If the equilibrium is not unique, the model has low predictive power. In contrast, Table 2. Therefore, consider the variant of the Battle of the Sexes game after thirty years of marriage.

The intuition behind the game described in Table 2. We must prove that each outcome is not a NE. That is, in each of the four outcomes, at least one of the players would find it beneficial to deviate.

More generally, in an N-player game, the best-response function of player i is the function Ri(a−i) that, for any given actions of players 1, 2, …, i − 1, i + 1, …, N, assigns an action for player i that maximizes player i's payoff. Let us now construct the best-response functions for Jacob and Rachel described in the Battle of the Sexes game given in Table 2.

It is straightforward to conclude that. Now, the importance of learning how to construct best-response functions becomes clear in the following proposition:. By Definition 2. Hence, by Definition 2. That is, in a NE outcome, each player chooses an action that is a best response to the actions chosen by other players in a NE. Proposition 2. The procedure for finding a NE is now very simple: First, we calculate the best-response function of each player. Second, we check which outcomes lie on the best-response functions of all players.

Those outcomes that we find to be on the best-response functions of all players constitute the NE outcomes.
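This two-step procedure — construct each player's best-response function, then keep the outcomes lying on all of them — can be sketched for a Battle of the Sexes game. The event names and payoffs below are assumed for illustration (each player prefers being together, with opposite rankings of the two events):

```python
from itertools import product

# Battle of the Sexes with assumed payoffs: player 0 prefers FOOTBALL,
# player 1 prefers OPERA, and both prefer attending the same event.
ACTIONS = ("FOOTBALL", "OPERA")
PAYOFF = {
    ("FOOTBALL", "FOOTBALL"): (2, 1),
    ("OPERA",    "OPERA"):    (1, 2),
    ("FOOTBALL", "OPERA"):    (0, 0),
    ("OPERA",    "FOOTBALL"): (0, 0),
}

def best_response(player, opp_action):
    """The set of actions maximizing `player`'s payoff against opp_action."""
    def pay(a):
        profile = (a, opp_action) if player == 0 else (opp_action, a)
        return PAYOFF[profile][player]
    top = max(pay(a) for a in ACTIONS)
    return {a for a in ACTIONS if pay(a) == top}

# Step 2: keep the outcomes lying on both best-response functions.
nash_outcomes = [(a0, a1) for a0, a1 in product(ACTIONS, repeat=2)
                 if a0 in best_response(0, a1) and a1 in best_response(1, a0)]
print(nash_outcomes)   # → [('FOOTBALL', 'FOOTBALL'), ('OPERA', 'OPERA')]
```

The two coordinated outcomes lie on both best-response functions, which is precisely why the Battle of the Sexes has two Nash equilibria.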

For example, in the Battle of the Sexes game, 2. So far, our analysis has concentrated on defining equilibrium concepts that enable us to select equilibrium outcomes for predicting how players would end up acting when facing similar games in the real world.

However, we have not discussed whether the proposed equilibria yield efficient outcomes. That is, we wish to define an efficiency concept that would enable us to compare outcomes from a welfare point of view. In particular, using the Pareto efficiency criterion, we wish to investigate whether there are outcomes that yield higher payoff levels to some players without reducing the payoffs of all other players.

For example, in the Peace-War game of Table 2. In the Battle of the Sexes game of Table 2. Our analysis so far has concentrated on normal form games where the players are restricted to choosing an action at the same time. In this section we analyze games in which players can move at different times and more than once. Such games are called extensive form games.

Extensive form games enable us to introduce timing into the model. Before going to the formal treatment, let us consider the following example. A terrorist boards a flight from Minneapolis to New York. Thirty minutes into the flight, after the plane reaches a cruising altitude of thirty thousand feet, the terrorist approaches the pilot and whispers that she will explode a bomb if the pilot does not fly to Cuba.

Figure 2. One player is the pilot, and the other is the terrorist. The game is represented by a tree, with a starting decision node (point I), other decision nodes (IIN and IIC), and terminal nodes (end points). Note that in some literature, the term vertex (vertices) is used in place of the term node(s). The branches connecting decision nodes, and decision nodes to terminal nodes, describe the actions available to the relevant player at a particular decision node. In this Pilot-Terrorist game, after hearing the terrorist's threat, the pilot gets to be the player to choose an action at the starting node.

At the starting node, the pilot's action set is given by {NY, CUBA}. In this simple game, the terrorist's action sets happen to be the same at both nodes, but this need not always be the case.

We can now give a formal definition to extensive form games with perfect information. Extensive form games with imperfect information are defined in Definition 2. A game tree containing a starting node, other decision nodes, terminal nodes, and branches linking each decision node to successor nodes.

For each player i, a specification of i's action set at each node at which player i is entitled to choose an action. Our preliminary discussion of extensive form games emphasized that a player may be called to choose an action more than once and that each time a player chooses an action, the player has to choose an action from the action set available at that particular node.

Therefore, we need to define the following term. A strategy for player i, denoted by si, is a complete plan (list) of actions, one action for each decision node at which the player is entitled to choose an action. Thus, it is important to note that a strategy is not what a player does at a single specific node but is a list of what the player does at every node where the player is entitled to choose an action.

What are the strategies available to the terrorist in the Pilot-Terrorist game described in Figure 2. Since the terrorist may end up in either node IIC or IIN, a strategy for the terrorist would be a specification of the precise action she will be taking at each node. That is, although it is clear that the terrorist will reach either node IIC or IIN but not both, a strategy for this player must specify what she will do at each of the two nodes.
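Since a strategy assigns one action to each decision node, the terrorist's strategy set is the Cartesian product of her per-node action sets — four strategies in all. A small sketch (node labels IIC and IIN follow the text; BOMB and NB stand for bombing and not bombing):

```python
from itertools import product

# One action set per decision node of the terrorist.
ACTIONS_AT = {"IIC": ("BOMB", "NB"), "IIN": ("BOMB", "NB")}

# A strategy picks one action for every node, so we enumerate the
# Cartesian product of the per-node action sets.
strategies = [dict(zip(ACTIONS_AT, combo))
              for combo in product(*ACTIONS_AT.values())]
print(len(strategies))   # → 4
print(strategies[0])     # → {'IIC': 'BOMB', 'IIN': 'BOMB'}
```

Even though the terrorist will reach only one of the two nodes in any play of the game, each of these four strategies specifies an action at both.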

Now that the game is well defined, we seek to find some predictions. The first step would be to search for a Nash equilibrium. Recalling our definition of Nash equilibrium Definition 2. It turns out that in many instances transforming an extensive form game into a normal form makes it easier to find the Nash equilibria. Table 2. Note that here, as in the Battle of the Sexes game, multiple NE greatly reduce our ability to generate predictions from this game.

For this reason, we now turn to defining an equilibrium concept that would narrow down the set of NE outcomes into a smaller set of outcomes. In the literature, an equilibrium concept that selects a smaller number of NE outcomes is called a refinement of Nash equilibrium, which is the subject of the following subsection.

In this subsection we define an equilibrium concept that satisfies all the requirements of NE (see Definition 2. ). This equilibrium concept may be helpful in selecting a smaller set of outcomes from the set of NE outcomes, by eliminating some undesirable NE outcomes. Before we proceed to the formal part, let us go back to the Pilot-Terrorist game and look at the three NE outcomes for this game.

Comparing the three NE outcomes, do you consider any equilibrium outcomes to be unreasonable? What would you suggest if the pilot were to hire you as her strategic adviser? Well, you would probably tell the pilot to fly to New York. By looking at the terrorist's payoffs at the terminal nodes in Figure 2. We now want to formalize an equilibrium concept that would exclude the unreasonable Nash equilibria. In particular, we look for an equilibrium concept that would exclude outcomes where the terrorist commits herself to the BOMB action, since such an action is incredible.

Moreover, we seek to define an equilibrium concept where the player who moves first the pilot in our case would calculate and take into account how subsequent players the terrorist in the present case would respond to the moves of the players who move earlier in the game. Hence, having computed how subsequent players would respond, the first player can optimize by narrowing down the set of actions yielding higher payoffs.

In the Pilot-Terrorist example, we wish to find an equilibrium concept that would generate a unique outcome where the pilot flies to New York. A subgame is a decision node from the original game along with the decision nodes and terminal nodes directly following this node. A subgame is called a proper subgame if it differs from the original game.

The two proper subgames are illustrated in Figure 2. An outcome is said to be a subgame perfect equilibrium (SPE) if it induces a Nash equilibrium in every subgame of the original game. Definition 2. In particular, a SPE outcome must be a NE for the original game, since the original game is a subgame of itself. Note that in each subgame, the action NB is a NE. Next, each proper subgame has only one NE, namely, the terrorist chooses NB. Thus, we have shown that the SPE refines the NE in the sense of excluding some outcomes that we may consider unreasonable.

The general methodology for finding the SPE outcomes is to use backward induction, meaning that we start searching for NE in the subgames leading to the terminal nodes.

Then, we look for NE in the subgames that lead to those last subgames, taking as given the NE actions to be played in the last subgames before the terminal nodes. Then, continuing to solve backwards, we reach the starting node and look for the action that maximizes player 1's payoff, given the NE of all the proper subgames.
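For the Pilot-Terrorist game the backward-induction computation is short. The terminal payoffs below are assumed for illustration (pilot's payoff first), chosen so that bombing is the worst outcome for the terrorist herself; only the ordering of the numbers matters:

```python
# Terminal payoffs (pilot, terrorist), indexed by (pilot's move,
# terrorist's move); the numbers are assumed for illustration.
TERMINALS = {
    ("NY",   "BOMB"): (-1, -1),
    ("NY",   "NB"):   ( 2,  0),
    ("CUBA", "BOMB"): (-1, -1),
    ("CUBA", "NB"):   ( 1,  1),
}

def backward_induction():
    """Solve the last mover's subgames first, then the starting node."""
    # NE of each proper subgame: the terrorist's best action at that node.
    reply = {dest: max(("BOMB", "NB"), key=lambda a: TERMINALS[(dest, a)][1])
             for dest in ("NY", "CUBA")}
    # The pilot optimizes given the anticipated replies.
    choice = max(("NY", "CUBA"), key=lambda d: TERMINALS[(d, reply[d])][0])
    return choice, reply

print(backward_induction())   # → ('NY', {'NY': 'NB', 'CUBA': 'NB'})
```

Under these assumed payoffs the terrorist plays NB at both nodes, so the pilot, anticipating this, flies to New York — the unique SPE described in the text.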

Note that the backward induction methodology is particularly useful when the game tree is long. Finally, another common methodology is to first find the NE outcomes for the game, say by transforming the extensive form representation into a normal form representation see subsection 2.

Then, once we have the set of all NE outcomes, we are left to select those outcomes that are also NE for all subgames. This can be done by trial and error, or, as we do in the proof of Claim 2.

Repeated games are used only once in this book, in section 6. A repeated game is a one-shot game that is identically repeated more than once. The importance of analyzing repeated games is that under certain circumstances cooperative outcomes, which are not equilibrium outcomes in a one-shot game, can emerge as equilibrium outcomes in a repeated, noncooperative game. A repeated game is a special "kind" of extensive form game in which, each period, all players move simultaneously and each player's action set does not vary with time; in a more general extensive form game, action sets may differ from one time period to another.

More precisely, a repeated game is a one-shot game see Definition 2. Each period, after the game is played, the players move to the next period. In a subsequent period, the players observe the actions chosen by all players including their own in all previous periods, and only then simultaneously choose their actions for the new game.

Thus, the important thing to remember is that players can perfectly monitor all the actions chosen in earlier periods prior to choosing an action in a subsequent period. The data collected by perfectly monitoring the actions played in each period is called a history at a period. To define the players' strategies under a repeated game, we now wish to modify Definition 2.

Consider our Peace-War game described in Table 2. We now make the following assumption regarding the players' payoffs in a repeated game: letting ρ denote the players' common time discount factor, the payoff to player i when the game is repeated T times is defined by the discounted sum Σ_{t=1}^{T} ρ^{t−1} πi(a^t), where a^t is the pair of actions played in period t. If the number of players is greater than two, then replace the pair with a^t, the list of all players' period-t actions.

We distinguish between two types of repeated games: a finitely repeated game and an infinitely repeated game. Suppose that the Peace-War game is repeated T times, in periods 1, 2, …, T. In Section 2. If we apply Definition 2. More precisely, how many strategies are there in country 1's strategy set? Let us first look at the second period. In the second period there could be four possible histories, resulting from the four possible first-period lists of players' actions.

Now, in order to fully specify a strategy, country 1 has to specify which action will be taken for every possible history.

Hence, the number of second-period plans is 2^4 = 16: one of two actions for each of the four possible histories. On top of this, there are two possible actions available to country 1 in period 1, so country 1's strategy set contains 2 × 16 = 32 strategies. Thus, Proposition 2. Using backward induction, let us suppose that the countries have already played T − 1 periods, and that they are now ready to play the final period-T game. Then, since period T is the last period in which the game is played, the period-T game is identical to the single one-shot Peace-War game.

Now, consider the game played in period T − 1. Both players know that after this game is completed, they will have one last game to play, in which they both will not cooperate and will play WAR. Hence, in period T − 1 each player would play the dominant strategy WAR. Working backwards through each of the preceding periods T − 2, T − 3, …, down to period 1, we can establish that WAR will be played by every player in each period.

Now, suppose that the game is repeated infinitely many times (i.e., T = ∞). The difference between the infinitely repeated game and the (small or large but) finitely repeated game is that in an infinitely repeated game there is no last period, so the backward induction used in the proof of Proposition 2. cannot be applied. We restrict the discussion of strategy in infinitely repeated games to one type, called trigger strategies. That is, country i cooperates by playing PEACE as long as no country (including itself) deviates from the cooperative outcome.

However, in the event that a country deviates even once, country i punishes the deviator by engaging in WAR forever. We now seek to investigate under what conditions the outcome where both countries play their trigger strategies constitutes a SPE. If the discount factor is sufficiently large, then the outcome where the players play their trigger strategies is a SPE.

Then, if country 1 deviates and plays WAR, it earns a one-time higher payoff, after which country 2 plays WAR forever, so both countries earn the war payoff in every subsequent period (Table 2. ). However, if country 1 does not deviate, then both countries play PEACE indefinitely, since country 2 plays a trigger strategy. Hence, both countries gain a payoff of 2 each period. Comparing 2. So far, we have shown that when both countries play the trigger strategy, no country has the incentive to unilaterally deviate from playing PEACE.
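The comparison can be made concrete. Take the per-period payoffs used in the text (2 under mutual PEACE, 1 under mutual WAR) and assume the one-period gain from a unilateral deviation to WAR is 3; then, writing ρ for the time discount factor, cooperation under trigger strategies pays exactly when 2/(1 − ρ) ≥ 3 + ρ/(1 − ρ), i.e., when ρ ≥ 1/2:

```python
# Per-period payoffs: 2 under mutual PEACE and 1 under mutual WAR (both
# from the text); the one-shot deviation payoff of 3 is an assumed number.
def cooperate_value(rho):
    """Discounted value of PEACE forever: 2 + 2*rho + 2*rho**2 + ..."""
    return 2 / (1 - rho)

def deviate_value(rho):
    """Gain 3 once, then WAR (payoff 1) forever under the trigger."""
    return 3 + rho * 1 / (1 - rho)

for rho in (0.3, 0.5, 0.7):
    print(rho, cooperate_value(rho) >= deviate_value(rho))
# → 0.3 False / 0.5 True / 0.7 True
```

As the text's comparison suggests, deviation is profitable only when the future is discounted heavily; patient players sustain cooperation.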

In the language of game theorists, we showed that deviation from the equilibrium path is not beneficial to any country. However, to prove that the trigger strategies constitute a SPE we need to show that if one country deviates and plays WAR, the other country would adhere to its trigger strategy and would play WAR forever.

In the language of game theorists, to prove SPE we need to prove that no player has the incentive to deviate from the played strategy even if the game proceeds off the equilibrium path. Altogether, the trigger strategies defined in Definition 2.

Hence, cooperation cannot be a SPE, since the players wish to maximize only their first-period profit. This discussion leads to the following corollary: In an infinitely repeated game, cooperation is easier to sustain when players have a higher time discount factor.

In this section we have shown that a one-shot game with a unique noncooperative Nash equilibrium can have a cooperative SPE when it is repeated infinitely. However, note that in the repeated game, this SPE is not unique. For example, it is easy to show that the noncooperative outcome where each country plays WAR in every period also constitutes a SPE. Moreover, the Folk Theorem ("Folk" because it was well known to game theorists long before it was formalized) states that for a sufficiently high time discount factor, a large number of outcomes of the repeated game can be supported as a SPE.

Thus, the fact that we merely show that cooperation is a SPE is insufficient to conclude that a game of this type will always end up with cooperation. All that we managed to show is that cooperation is a possible SPE in an infinitely repeated game. Finally, let us look at an experiment Robert Axelrod conducted in which he invited people to write computer programs that play the Prisoners' Dilemma game against other computer programs a large number of times.

The winner was the programmer who managed to score the largest sum over all the games played against all other programs. The important result of this tournament was that the program that used a strategy called Tit-for-Tat won the highest score. The Tit-for-Tat strategy is different from the trigger strategy defined in Definition 2.

In the Tit-for-Tat strategy, a player would play in period t what the opponent played in period t - 1. Thus, even if deviation occurred, once the opponent resumes cooperation, the players would switch to cooperation in a subsequent period. Under the trigger strategy, once one of the players deviates, the game enters a noncooperative phase forever.
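The difference between the two strategies is easy to see in a short simulation. The `one_time_deviator` below is a hypothetical player, introduced only to trigger the punishment phase once:

```python
def trigger(my_hist, opp_hist):
    """PEACE until anyone has ever played WAR, then WAR forever."""
    return "WAR" if "WAR" in my_hist + opp_hist else "PEACE"

def tit_for_tat(my_hist, opp_hist):
    """Open with PEACE, then repeat the opponent's previous action."""
    return opp_hist[-1] if opp_hist else "PEACE"

def one_time_deviator(my_hist, opp_hist):
    """Plays WAR exactly once (in the second period), otherwise PEACE."""
    return "WAR" if len(my_hist) == 1 else "PEACE"

def play(strat1, strat2, periods=6):
    """Run the repeated game; players observe full histories each period."""
    h1, h2 = [], []
    for _ in range(periods):
        a1, a2 = strat1(h1, h2), strat2(h2, h1)
        h1.append(a1)
        h2.append(a2)
    return h2   # the responding player's path of play

print(play(one_time_deviator, tit_for_tat))
# → ['PEACE', 'PEACE', 'WAR', 'PEACE', 'PEACE', 'PEACE']
print(play(one_time_deviator, trigger))
# → ['PEACE', 'PEACE', 'WAR', 'WAR', 'WAR', 'WAR']
```

Against Tit-for-Tat the punishment lasts a single period and cooperation resumes; under the trigger strategy the single deviation is punished forever.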

The tools developed in this appendix are not implemented in this book, and are brought up here only for the sake of completeness. Thus, this appendix is not necessary for studying this book successfully, and beginning readers are urged to skip it. Games with mixed actions are those in which the players randomize over the actions available in their action sets.

This is not because we think that players do not choose actions randomly in real life. On the contrary, the reader can probably recall many instances in which he or she decided to randomize actions.

The major reason why games with mixed actions are hard to interpret is that it is not always clear why the players benefit from randomizing among their pure actions. The attractive feature of games with mixed actions is that a Nash equilibrium in mixed actions always exists.

Recall that Proposition 2. The reason for focusing on the game in. We now wish to modify a game with pure strategies to a game where the players choose probabilities of taking actions from their action sets. Recall that by Definition 2. An outcome of a game with mixed actions is the list of the realization of the actions played by each player. The reader has probably noticed that Definition 2.

The reason for introducing this term is that in a game with mixed actions, the players choose only probabilities for playing their strategies, so the outcome itself is random.

In games with pure actions, the term action profile and the term outcome mean the same thing since there is no uncertainty. However, in games with mixed actions, the term action profile is used to describe the list of probability distributions over actions chosen by each player, whereas the term outcome specifies the list of actions played by each player after the uncertainty is resolved.

Our definition of the "mixed extension" of the game is incomplete unless we specify the payoff to each player under all possible action profiles. A payoff function of a player in the mixed-action game is the expected value of the payoffs of the player in the game with the pure actions. According to Definition 2. Applying the NE concept, defined in Definition 2. An action profile of mixed actions is said to be a Nash equilibrium in mixed actions if no player would find it beneficial to deviate from her or his mixed action, given that the other player does not deviate from her or his mixed action.

We now turn to solving for the Nash equilibrium of the mixed-actions extension game of the game described in Table 2. Substituting the payoffs associated with the "pure" outcomes of the game in Table 2. Restating Definition 2. It is easy to check that the players' payoffs 2. The best-response functions of each player are drawn in Figure 2.

Equations 2. Although a NE in pure actions does not exist for the game described in Table 2. , there exists a unique NE in mixed actions for this game. The proposition follows directly from the right-hand side of Figure 2.

Finally, the best-response functions given in 2. Since the equilibrium occurs when the two curves intersect in their "middle" sections, we have it that under the NE mixed outcome, each player is indifferent to the choice among all other probabilities that can be played, assuming that the other player does not deviate from the mixed action.
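The indifference property is easy to verify numerically. The 2×2 game below is assumed for illustration (matching-pennies payoffs, which have no pure NE; the payoffs in the text's table may differ): at the mixed NE where both players put probability 1/2 on each action, every mixture a player might choose yields the same expected payoff.

```python
# Assumed matching-pennies payoffs: the row player wins on a match,
# the column player wins on a mismatch; there is no NE in pure actions.
PAYOFF = {
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}

def expected_payoff(player, p, q):
    """Expected payoff when row plays H with prob. p and column with prob. q."""
    probs = {("H", "H"): p * q,       ("H", "T"): p * (1 - q),
             ("T", "H"): (1 - p) * q, ("T", "T"): (1 - p) * (1 - q)}
    return sum(probs[o] * PAYOFF[o][player] for o in probs)

# Against the opponent's equilibrium mixture q = 1/2, the row player's
# expected payoff is 0 for every own mixture p:
for p in (0.0, 0.25, 0.5, 1.0):
    print(expected_payoff(0, p, 0.5))   # → 0.0 each time
```

Because every own mixture does equally well against the opponent's equilibrium mixture, nothing pins a player to the equilibrium probabilities — which is exactly the interpretive difficulty noted in the text.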

This result makes the intuitive interpretation of a mixed-action game rather difficult, because there is no particular reason why each player would stick to the mixed action played under the NE.

Games with imperfect information are brought up here only for the sake of completeness, and beginning readers are urged to skip this appendix. Games with imperfect information describe situations where some players do not always observe the action taken by another player earlier in the game, thereby making the player unsure which node has been reached. For example, Figure 2. In Figure 2. The information set tells us that in this game, the terrorist cannot distinguish whether node IIC or IIN has been reached.

Thus, when the terrorist has her turn to make a move, she has to choose an action without knowing the precise node she is on. An information set for a player is a collection of nodes in which the player has to choose an action.

When a player reaches an information set, the player knows that the particular information set has been reached, but if the information set contains more than one node, the player does not know which particular node in this collection has been reached. A game is called: (1) a game with imperfect information if at least one of the information sets contains more than one node; (2) a game with perfect information if each information set contains a single node.

Thus, all the extensive form games analyzed in Section 2. We now slightly extend our definition of a strategy Definition 2. In a game with imperfect information, a strategy for a player is a list of actions that a player chooses at any information set where the player is entitled to take an action. Thus, Definition 2.

Under perfect information, of course, Definitions 2. Finally, we need to extend our definition of subgames Definition 2. A subgame is an information set that contains a single node, and all the subsequent decision and terminal nodes, provided that all subsequent nodes are not contained in information sets containing nodes that cannot be reached from the subgame. However, the nodes labeled B, C, E, and F are not starting nodes for a subgame since some subsequent nodes are contained in information sets containing nodes that cannot be reached from these nodes.

For example, the modified Pilot-Terrorist game described in Figure 2. We conclude our discussion of games with imperfect information with solving for NE and SPE for the modified Pilot-Terrorist game described in Figure 2. Thus, in the Pilot-Terrorist game under imperfect information, the number of outcomes has been reduced from eight to four since the terrorist now takes a decision at one information set compared with two nodes, under perfect information.

Consider the normal form game described in Table 2. Find the conditions on the parameters a, b, c, d, e, f, g, and h that will ensure that. Consider the Traveler's Dilemma, where two travelers returning home from a remote island, where they bought identical rare antiques, find out that the airline has managed to smash these rare antiques. The airline manager assures the passengers of adequate compensation. Since the airline manager does not know the actual cost of the antiques, he offers the two travelers the opportunity to write down separately, on a piece of paper, the true cost of the antiques, which is restricted to be any number between 2 and The airline manager states the following compensation rules: (a) If traveler i writes down a larger number than traveler j, i.

Letting n1 and n2 be the actions of the players, answer the following questions:



