    The Analytical Hierarchy Process in Decision Making


    CB600 Game Theory

    Game Theory III

    7. Bimatrix Games
    We have encountered various ways of solving constant-sum games in the previous sections. Now we turn to the case where the pay-offs of the two players do not add up to a constant. In this case, we cannot represent the pay-offs of both players in a single matrix, so we shall use two separate matrices, one for each player. Hence, finite two-player non-constant-sum games are called bimatrix games. These are more common in practice than matrix games, but are also more difficult to solve. If a bimatrix game happens to be constant-sum, then we can represent and solve it as a matrix game, so from now on we exclude this possibility.

    The most famous bimatrix game is the Prisoners' Dilemma. Two people are in prison, suspected of a bank robbery. The prosecutor is convinced of their guilt but has not got enough evidence for a conviction. She asks each of them to testify against the other. If only one prisoner testifies, he is set free and the other one gets ten years. If they both testify, they both get a reduced sentence of eight years. If neither of them squeals then they both spend a year in prison for carrying a firearm without a permit. This is not a constant-sum game because the sum of their prison sentences varies from 2 to 16.

    Consider the example of the two competing television networks from section 3 and modify it as follows. The networks still have 10 million potential viewers between them, but if both networks show the same genre, then 1 million of them will switch the telly off and listen to the radio instead. This is then not a constant-sum game because the sum of the pay-offs is 10 million for some outcomes and 9 million for others.

    Just like matrix games, bimatrix games can be solved using either pure or mixed strategies. Solutions in pure strategies may or may not exist, but a solution in mixed strategies is guaranteed to exist. This guarantee is known as Nash's Theorem. Again, we do not prove this here. In any case, we shall restrict our attention to pure strategies only.

    8. Solution of Bimatrix Games in Pure Strategies
    The pure strategies solution, just like for matrix games, is based on the concept of an equilibrium point. Let us denote the pay-off matrices of players 1 and 2 by A and B, and let their entries be aij (= P1(i,j)) and bij (= P2(i,j)) respectively. Then (k, l) is a solution in pure strategies (or an equilibrium point) if akl ≥ ail for all i and bkl ≥ bkj for all j. Just like for matrix games, no player can benefit from a unilateral change in strategy away from this point. The value of the game is (akl, bkl). In practical terms, this means that we find the largest values in each column of matrix A and the largest values in each row of matrix B. If any entries attain these maxima in both matrices simultaneously, then they represent solutions in pure strategies to the game. If no such entries exist, there is no solution in pure strategies. Note that solving matrix games in pure strategies can be viewed as a special case of solving bimatrix games in pure strategies.
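
    This test is easy to automate. The following Python sketch (illustrative only; strategies are indexed from 0) scans every strategy pair for the column-maximum/row-maximum property, using the Prisoners' Dilemma pay-offs from section 7:

```python
def pure_equilibria(A, B):
    """All pairs (k, l) such that a_kl is a column maximum of A and
    b_kl is a row maximum of B, i.e. all equilibrium points in pure
    strategies of the bimatrix game (A, B)."""
    m, n = len(A), len(A[0])
    return [(k, l) for k in range(m) for l in range(n)
            if all(A[k][l] >= A[i][l] for i in range(m))
            and all(B[k][l] >= B[k][j] for j in range(n))]

# Prisoners' Dilemma: strategy 0 = testify, 1 = be silent.
A = [[-8, 0], [-10, -1]]    # payoffs of prisoner 1
B = [[-8, -10], [0, -1]]    # payoffs of prisoner 2
print(pure_equilibria(A, B))  # -> [(0, 0)]: both testify
```

    The same function applied to a constant-sum game (e.g. matching pennies) returns an empty list, consistent with the fact that not every game has a solution in pure strategies.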

    While solutions in pure strategies represent a possible solution concept, there are some serious drawbacks to this concept. For certain games, payoffs in two equilibrium points may be completely different (see the game of "chicken" later on). It may also be that the equilibrium point is not advantageous to both players and co-operation between the two players can result in better pay-offs for both of them than following their solutions in pure strategies (e.g. in the prisoner's dilemma).

    Let us solve the Prisoner's Dilemma using this concept. The payoff matrices are given below. (Note that the rewards are negative - the less time in prison the better.)
    payoff of prisoner 1         prisoner 2         payoff of prisoner 2         prisoner 2
                              testify  be silent                               testify  be silent
    prisoner 1  testify         -8         0        prisoner 1  testify          -8       -10
                be silent      -10        -1                    be silent         0        -1
    Thus, the equilibrium point implies that both prisoners should testify. Neither player can benefit from a unilateral change from this strategy - but if they are able to co-operate, they can improve on it by making a pact to stay silent.

    Another example where co-operation can beat the pure strategies solution concerns the arms race between two nations. Each of them can develop a new missile, costing $10 billion. If neither of them develops it, or both of them do, the balance of power will maintain the peace. If only one nation develops the missile, she will conquer the other. The war will bring a revenue of $20 billion to the winning nation and $100 billion worth of devastation to the losing one. The pay-offs are shown below:
    payoff of nation 1           nation 2           payoff of nation 2           nation 2
                              develop    not                                  develop    not
    nation 1  develop           -10       10        nation 1  develop           -10     -100
              not              -100        0                  not                10        0
    The equilibrium point is for both nations to develop the missile. Note that the two countries would be better off cooperating and not developing the missile but fear and mistrust of the other leads to an arms race.

    Let the payoff matrices for the two competing TV networks be as follows:
    payoff of network 1             network 2               payoff of network 2             network 2
                             western   soap  comedy                                  western   soap  comedy
    network 1  western           3.0    1.5     6.0         network 1  western           6.0    8.5     4.0
               soap              4.5    5.3     5.0                    soap              5.5    3.7     5.0
               comedy            3.8    1.4     6.5                    comedy            6.2    8.6     2.5
    The equilibrium point is for network 1 to show soap and network 2 to show western. The value of this game is 4.5 to network 1 and 5.5 to network 2.

    The following game is known as "chicken" - don't try this at home! The two players are driving towards each other at speed. The one to swerve is the "chicken" who lost his nerve. The payoff function is pretty self-explanatory!
    payoff of driver 1           driver 2           payoff of driver 2           driver 2
                              swerve  drive on                                swerve  drive on
    driver 1  swerve              0       -1        driver 1  swerve              0        1
              drive on            1     -100                  drive on           -1     -100
    (Of course, if players place no value on their lives then this is a zero-sum game.) There are two equilibrium points: where one player swerves and the other does not. However, this is of no use: the two equilibrium points give different pure strategies! (Look back to section 3 and convince yourself that this cannot occur for matrix games.)
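
    As a quick check, the same column-maximum/row-maximum test applied to the "chicken" pay-offs recovers exactly these two equilibrium points (Python sketch, strategies indexed from 0):

```python
# Chicken: strategy 0 = swerve, 1 = drive on.
A = [[0, -1], [1, -100]]   # payoffs of driver 1
B = [[0, 1], [-1, -100]]   # payoffs of driver 2

# An entry (k, l) is an equilibrium point if A[k][l] is a column
# maximum of A and B[k][l] is a row maximum of B.
eqs = [(k, l) for k in range(2) for l in range(2)
       if all(A[k][l] >= A[i][l] for i in range(2))
       and all(B[k][l] >= B[k][j] for j in range(2))]
print(eqs)  # -> [(0, 1), (1, 0)]: exactly one driver swerves
```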

    Finally, we note that not every bimatrix game has a solution in pure strategies.

    For further details, see section 14.4 of Winston.


    Game Theory IV

    9. Introduction to n-Person Game Theory
    We have so far concentrated on two-person games; this section extends our study to games that may involve several players. Such games are more intricate than two-person games, since they allow sets of players to form coalitions. Thus, such games are sometimes also called co-operative games. (Note that it is possible to have a co-operative game involving only two players.) If we do not allow co-operation between the players, then we can extend the pure and mixed strategy solution concepts of the previous sections to n-person games. Nash's Theorem still guarantees the existence of a solution in mixed strategies for such games. However, there is no general algorithm to find such solutions. Henceforth, we shall focus on co-operative games.

    Although n-person games can be formulated using the standard framework given in section 1, we may find the following concept of a characteristic function useful. For each subset S of I (the set of players), the characteristic function v of a game gives the amount v(S) that the members of S can be sure of receiving if they act together and form a coalition. We define v({}), the value of the empty coalition, to be zero.

    Assume that the prisoners in the prisoners' dilemma game are able and willing to cooperate. In this case the characteristic function of the game is
    v{} = 0, v(1) = v(2) = -8, v{1, 2} = -2
    as by working together the players can achieve a combined reward of only two years in prison. We can extend this game for three prisoners, the testimony of any one of which is sufficient to send the other two to jail. Observe that all three of them need to cooperate to avoid lengthy prison sentences. Hence, the characteristic function is
    v{} = 0, v(1) = v(2) = v(3) = -8, v(1, 2) = v(1, 3) = v(2, 3) = -16, v{1, 2, 3} = -3.

    Consider the following 3-player game. Joe invented a new drug and can sell its formula to one of two companies. That company will then split a million-dollar profit with Joe. Let Joe be player 1 and the two companies players 2 and 3. The characteristic function of this game is then (in million dollars)
    v{} = v(1) = v(2) = v(3) = v{2, 3} = 0, v(1, 2) = v(1, 3) = v{1, 2, 3} = 1.

    In another 3-player game, player 1 owns a piece of land worth £10000 and can sell it to player 2 or player 3. Players 2 and 3 are developers who can increase the worth of the land to £20000 and £30000 respectively. The characteristic function is
    v({1}) = 10000, v({1, 2}) = 20000, v({1, 3}) = v({1, 2, 3}) = 30000, and v(S) = 0 otherwise.

    Of course, we can have more than three players. Consider the voting game. Let I be a committee of n members and let the utility of passing a resolution to a set of players S be 1 (not passing it has utility 0). The characteristic function of this game is
    v(S) = 0 if |S| ≤ n/2 and v(S) = 1 if |S| > n/2.

    For any two subsets of players A and B that have no players in common, the characteristic function must satisfy v(A ∪ B) ≥ v(A) + v(B). This property of the characteristic function is known as superadditivity. Clearly, there would be no point in forming coalitions if this property did not hold!
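
    Superadditivity is straightforward to verify computationally when the number of players is small. A Python sketch (illustrative; encoding the characteristic function as a dictionary keyed by frozensets is an implementation choice, not part of the notes) checks the drug game from above:

```python
from itertools import chain, combinations

def subsets(players):
    """All subsets of a set of players, as tuples."""
    s = list(players)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_superadditive(v, players):
    """Check v(A | B) >= v(A) + v(B) for every pair of disjoint coalitions."""
    for A in map(frozenset, subsets(players)):
        for B in map(frozenset, subsets(players)):
            if A & B:
                continue  # only disjoint pairs are constrained
            if v[A | B] < v[A] + v[B]:
                return False
    return True

# Drug game from the text: any coalition containing Joe (player 1)
# and at least one company (players 2, 3) is worth 1 (million $).
v = {frozenset(S): (1 if 1 in S and len(S) >= 2 else 0)
     for S in subsets({1, 2, 3})}
print(is_superadditive(v, {1, 2, 3}))  # -> True
```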
    Just like matrix and bimatrix games could be solved in terms of different solution concepts (namely pure and mixed strategies), so n-person games also have a number of different solution concepts. However, they can all be expressed using a common notation, that of a reward vector. A reward vector x = (x1, x2, ..., xn) is a vector such that player i receives a reward (or pay-off) xi. This reward vector has to satisfy certain conditions of rationality, as follows.
    - Group rationality: v(I) = ∑i∈I xi
    (the players divide the total reward v(I) between themselves)
    - Individual rationality: xi ≥ v({i}) for all i ∈ I
    (everyone gets at least as much as they could get on their own)
    If a reward vector satisfies the above conditions, it is called an imputation.

    In the next two sections, we shall look at two solution concepts for n-player games, namely the core and the Shapley value.

    For further information, see Winston (section 14.5).

    10. The Core of an n-Person Game
    Before defining the core, the concept of domination has to be understood. We say that the imputation y = (y1, y2, ..., yn) dominates x through a coalition S if ∑i∈S yi ≤ v(S) and yi > xi for all i ∈ S. Therefore y is attainable by the members of S and would be preferred to x by them. Then, following von Neumann and Morgenstern, we define the core of an n-person game as the set of all undominated imputations.

    However, instead of determining the core using this definition, we can rely on the following Theorem.
    An imputation x = (x1, x2, ..., xn) is in the core of an n-person game if and only if ∑i∈S xi ≥ v(S) for every coalition S ⊆ I.
    This means that the imputation x is undominated and is thus in the core, if and only if for every coalition S, the total of the rewards received by the players in S is at least as large as v(S). Thus, by considering the set of inequalities defined by this theorem, together with the group rationality equation and the individual rationality inequalities, we can find the core.

    Let us find the core of the games presented in the previous section.
    - For the (2-person) prisoner's dilemma, -8 ≤ x1 ≤ 6, x2 = -2 - x1.
    - For the drug game, x1 = 1, x2 = 0, x3 = 0.
    - For the land development game, 20000 ≤ x1 ≤ 30000, x2 = 0, x3 = 30000 - x1.
    - The core of the voting game is empty for n ≥ 3.
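
    The core conditions can likewise be checked mechanically for any candidate reward vector. A Python sketch (illustrative; the dictionary encoding of v is an implementation choice), applied to the drug game:

```python
from itertools import chain, combinations

def in_core(x, v, players):
    """x (dict player -> reward) is in the core iff it satisfies group
    rationality and every coalition S receives at least v(S) in total.
    (Singleton coalitions cover individual rationality.)"""
    if abs(sum(x.values()) - v[frozenset(players)]) > 1e-9:
        return False  # group rationality fails
    coalitions = chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))
    return all(sum(x[i] for i in S) >= v[frozenset(S)] for S in coalitions)

# Drug game from the text (values in millions of dollars).
players = [1, 2, 3]
v = {frozenset(S): 0
     for r in range(len(players) + 1) for S in combinations(players, r)}
v[frozenset({1, 2})] = v[frozenset({1, 3})] = v[frozenset({1, 2, 3})] = 1

print(in_core({1: 1, 2: 0, 3: 0}, v, players))          # -> True
print(in_core({1: 0.5, 2: 0.25, 3: 0.25}, v, players))  # -> False
```

    The second vector fails because the coalition {1, 2} could earn 1 on its own but is only offered 0.75.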

    We note that the above solution concept has the advantage of reflecting the idea of coalitions well (through the idea of domination); however, in practice it has some disadvantages, such as:
    - the core may be empty, meaning that there is no solution;
    - it usually does not consist of a unique imputation, meaning that there may be an infinite number of solutions; and
    - it is based on the characteristic function, i.e. there is no concept of "threat".

    For further information, see Winston (section 14.6).

    11. The Shapley Value
    This solution concept is based on the idea of players "adding value" to a coalition. Shapley proposed a solution to co-operative games consisting of a reward vector x, given in the following Theorem.
    Given an n-person game with characteristic function v, there is a unique reward vector x = (x1, x2, ..., xn), called the Shapley value solution, where the reward xi of the ith player is given by
    xi = ∑S⊆I, i∉S pn(S) [v(S ∪ {i}) - v(S)],
    where pn(S) = |S|! (n - |S| - 1)! / n! and |S| denotes the number of players in S.

    Although the formula above may seem complex, it has a simple interpretation. Imagine that the players arrive in a random order at a "meeting place" to form coalitions. If player i forms a coalition with the players who are already present when he/she arrives, he/she adds v(S ∪ {i}) - v(S) to the coalition S. The probability that the players present when player i arrives form exactly the coalition S is pn(S). Thus, the Shapley value solution concept implies that the reward of player i should be the expected amount that he/she adds to the coalition made up of the players who are present when he/she arrives. We note that if n is reasonably small, an alternative - and much simpler - way of calculating the Shapley values is to use this observation. For each possible arrival order of the players we calculate the value added by each player. Then we take the average value added by each player, and this gives us the appropriate rewards xi.

    Let us calculate the Shapley values for the examples given in section 9.
    - For the prisoners' dilemma, x1 = x2 = -1 (2 players) or x1 = x2 = x3 = -1 (3 players).
    - For the drug game, x1 = 2/3, x2 = x3 = 1/6.
    - For the land development game, x1 ≈ 21666.67, x2 ≈ 1666.67, x3 ≈ 6666.67.
    - For the voting game (trivially), xi = 1/n.
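
    The "arrival order" interpretation gives a direct, if inefficient, way of computing Shapley values for small n. A Python sketch (illustrative) reproduces the drug game values above:

```python
from itertools import combinations, permutations

def shapley(players, v):
    """Average, over all arrival orders, of the value each player adds
    to the coalition of players already present when he/she arrives."""
    totals = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        present = frozenset()
        for i in order:
            totals[i] += v[present | {i}] - v[present]  # value player i adds
            present = present | {i}
    return {i: totals[i] / len(orders) for i in players}

# Drug game: Joe (player 1) plus at least one company (players 2, 3)
# is worth 1 (million $); every other coalition is worth 0.
players = [1, 2, 3]
v = {frozenset(S): 0
     for r in range(len(players) + 1) for S in combinations(players, r)}
v[frozenset({1, 2})] = v[frozenset({1, 3})] = v[frozenset({1, 2, 3})] = 1

x = shapley(players, v)
print(x)  # player 1 gets 2/3, players 2 and 3 get 1/6 each
```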

    The advantages of the Shapley value solution concept are:
    - it always exists;
    - there is only one solution, i.e. the solution consists of a unique imputation;
    - it works in practice.
    However, it has some disadvantages:
    - it is also based on the characteristic function and thus it has no "threat" concept;
    - it can be defeated by coalitions, meaning that actual coalitions may be able to achieve a higher value than the one given by the Shapley values.

    For further information, see Winston (section 14.7).

    12. Introduction to Hypergames
    A fundamental assumption of noncooperative Game Theory is that although players do not know what the actions of the other players are, they do know both the set of strategies available to them and the payoffs they receive for each outcome.

    The hypergame model does not make this assumption. This means that each player formulates the game as he/she sees it, and plays accordingly - but different players may perceive different games.

    Consider the arms race example of section 8 and assume that although both nations are peace-loving and would not attack the other, they believe the other nation wishes to attack them. These are the payoff matrices as seen by nation 1.
    payoff of nation 1           nation 2           payoff of nation 2           nation 2
                              develop    not                                  develop    not
    nation 1  develop           -10      -10        nation 1  develop           -10        0
              not              -100        0                  not                10        0
    These are the payoff matrices as seen by nation 2.
    payoff of nation 1           nation 2           payoff of nation 2           nation 2
                              develop    not                                  develop    not
    nation 1  develop           -10       10        nation 1  develop           -10     -100
              not                 0        0                  not               -10        0
    So, in neither perceived game is there a solution in pure strategies - the nations may still enter an arms race.

    We could extend the prisoners' dilemma example to a hypergame. Assume that one of the prisoners has a death threat against him. He would prefer a long prison sentence since if he walks free now or in a year's time he will be gunned down. He hopes that after eight or ten years things will have cooled down and he will not be killed. The other prisoner does not know about this.

    It may be that a player has a strategy the other does not know about. What if, in the TV networks example of section 3, one of the networks can schedule a reality show, while the other does not believe that this can happen?

    We could extend the prisoners' dilemma example to such a hypergame. Assume that the godfather of one of the prisoners has promised to spring him from prison if need be. Whether or not he is sent to jail, he can escape and flee with the booty to Rio!

    For more information, see Chapters 12 and 13 of J. Rosenhead (ed.): Rational Analysis for a Problematic World!

    A similar extension to game theory is the metagame model. We do not cover this here, but Chapters 10 and 11 of J. Rosenhead (ed.): Rational Analysis for a Problematic World provide a good overview.

    Game Theory V

    13. Multiattribute Decision Theory: The Analytic Hierarchy Process

    In the problems we encountered so far, the decision-makers made a choice based on how each possible action affected a single variable (or attribute), namely their payoff. However, in many situations, there are many attributes which must be taken into account. Problems of this nature are called multiattribute decision problems. For simplicity, we shall restrict ourselves to a single player only.
    In this module, we shall look only at the Analytic Hierarchy Process, which can be used to find optimal decisions in multiattribute decision problems. However, there are other applicable solution methods: three particularly important ones are multiattribute utility theory, goal programming and Pareto optimality. (See Winston Ch. 14 for more details on these.)

    The Analytic Hierarchy Process (AHP), developed by Thomas L. Saaty, aims to help a decision-maker who has to choose between alternatives (strategies) on the basis of how well those alternatives meet various objectives. For example, in choosing a job, the objectives may be (1) salary, (2) quality of life in the city where the job is located, (3) interest in the type of work and (4) distance of the job location from family and friends. In the following, we give an overview of this method.
    1. Determine the weight wi of each objective i, with weights adding up to 1. See later for a detailed explanation of this procedure.
    2. Determine the score sji of each alternative j with respect to objective i. This method will also be explained later in detail.
    3. Determine the overall score of each alternative j as ∑i wi sji.
    Using this procedure, one can select the alternative with the highest overall score.

    Obtaining Weights for Each Objective
    We now show how to calculate the weights wi needed in step 1 of the AHP. For n objectives we create an n × n matrix A, called the pairwise comparison matrix. The entry aij indicates how much more important objective i is than objective j. "Importance" is measured on an integer scale from 1 to 9, with 1 indicating objectives of equal importance and 9 indicating that one objective is a lot more important than the other. Obviously aii = 1, and we also require aji = 1/aij.
    In our example, let the pairwise comparison matrix be .
    To find the weights of each objective, we use the following procedure.
    1. Divide each entry in column j of A by the sum of the entries in column j. Doing this for every column of A gives a new matrix Anorm with the sum of entries in each column being 1.
    In our example, Anorm = .
    2. Let wi be the average of entries in row i of Anorm.
    In our example, w1 = .5115, w2 = .0986, w3 = .2433 and w4 = .1466.
    We note that this is in reality only an approximation of the weight given to objective i. However, wi proves to be a good approximation in practice.
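
    The two steps above are easy to express in code. In the following Python sketch the 3 × 3 comparison matrix is hypothetical (chosen only for illustration; it is not the 4 × 4 matrix of the running example):

```python
def ahp_weights(A):
    """Approximate objective weights: normalise each column of the
    pairwise comparison matrix, then average each row (steps 1-2)."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    Anorm = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in Anorm]

# Hypothetical comparison matrix for three objectives;
# note a_ii = 1 and a_ji = 1/a_ij.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])  # the weights sum to 1
```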

    Checking for Consistency
    There is also a procedure to determine whether the pairwise comparison matrix is consistent, i.e. whether the decision-maker's comparisons are consistent with each other.
    1. Multiply A by [w1, w2, ... , wn]T.
    In our example, AwT = [2.0775, 0.3959, 0.9894, 0.5933]T.
    2. Divide the ith entry in the resulting matrix (vector) by wi for all i.
    3. Calculate the average of the matrix (vector) thus gained to get dmax.
    In our example, dmax = 4.05.
    4. Calculate the consistency index (CI) as CI = (dmax - n)/(n - 1), where n is the number of objectives. In our example, CI = 0.017.
    5. Divide CI by RI, the random index. RI is tabulated for values of n as follows: RI(2) = 0, RI(3) = 0.58, RI(4) = 0.90, RI(5) = 1.12, ...
    If the value of CI is sufficiently small, then the decision-maker's comparisons are probably consistent enough to give useful estimates of the weights for their objective function. If CI/RI < 0.1, the degree of consistency is deemed to be satisfactory.
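
    The consistency check can be sketched in the same style; the comparison matrix below is again hypothetical, and the RI values are those quoted above:

```python
def ahp_weights(A):
    # as in the weight-calculation step: normalise columns, average rows
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

def consistency_ratio(A, w):
    """Steps 1-5 above, returning CI/RI (defined here for 3 <= n <= 5)."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]  # step 1
    ratios = [Aw[i] / w[i] for i in range(n)]                       # step 2
    d_max = sum(ratios) / n                                         # step 3
    CI = (d_max - n) / (n - 1)                                      # step 4
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]                             # step 5
    return CI / RI

# Hypothetical 3-objective comparison matrix.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
CR = consistency_ratio(A, ahp_weights(A))
print(CR < 0.1)  # -> True: these comparisons are consistent enough
```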

    Finding the Score of an Alternative for an Objective
    Now we show how to calculate the scores sji for each alternative j with respect to objective i, as is needed in step 2 of the AHP. Similarly to the pairwise comparison matrix for objectives, we can create pairwise comparison matrices Bi for alternatives. For each objective i, we create a matrix Bi showing the decision-maker's preferences for alternatives with respect to objective i. The entry bij in matrix Bk indicates how much more preferable alternative i is to alternative j, with respect to objective k.

    Let B1 be the comparison matrix for salary for the three jobs on offer.
    We follow a similar procedure to the one of calculating the weights in order to calculate the scores.
    1. Divide each entry in column j of Bi by the sum of entries in column j. Doing this for every column of Bi gives a new matrix Binorm.
    2. Let sji be the average of entries in row j of Binorm.
    In our example, we get s11 = .571, s21 = .286 and s31 = .143.
    Having found the values sji, we can use them in conjunction with the weights wi to find which alternative is the most preferred one overall. We also note that the consistency of the pairwise comparison matrices for alternatives can be established in a similar manner to that shown above for the matrix for objectives.

    Final calculations
    Following the above procedure, we can summarize all weights and scores in a table such as the one below and find the overall score ∑i wi sji for each job:
    objective            weight   job 1   job 2   job 3
    salary               .5115    .571    .286    .143
    quality of life      .0986    .159    .252    .589
    interest in work     .2433    .088    .669    .243
    nearness to family   .1466    .069    .426    .506
    overall score                 .339    .396    .265
    Hence, the job-seeker should choose Job 2.
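    The overall scores in the table can be reproduced directly from the weights and the scores sji (Python sketch):

```python
# Weights and per-objective scores exactly as tabulated above;
# objectives in order: salary, quality of life, interest, nearness.
weights = [0.5115, 0.0986, 0.2433, 0.1466]
scores = {
    "job 1": [0.571, 0.159, 0.088, 0.069],
    "job 2": [0.286, 0.252, 0.669, 0.426],
    "job 3": [0.143, 0.589, 0.243, 0.506],
}
# Overall score of job j is the weighted sum of its scores.
overall = {job: sum(w * s for w, s in zip(weights, sji))
           for job, sji in scores.items()}
for job, score in overall.items():
    print(job, round(score, 3))
best = max(overall, key=overall.get)
print(best)  # -> job 2
```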
    For more information on this topic, see Winston (section 13.7).


