An Application of Markov Chain Model in Board Game Revised

Haizhou Liu

August 2016

A thesis presented for the degree of Master of Science in Applied Mathematics
Department of Mathematics
Supervisor: Neelesh Tiruviluamala
Thesis Committee: Sergey Lototsky (Chairman), Neelesh Tiruviluamala, Mike Zyda

UNIVERSITY OF SOUTHERN CALIFORNIA

Abstract

In this thesis, we explore the use of a Markov chain model and Monte Carlo simulation to analyze and tune the prototype of a chance-based board game. Design goals for game length and gameplay complexity are set. To achieve these goals, we introduce a method that transforms the original Markov chain into a new chain in order to analyze the game process and results. Based on the results, we tune the prototype by adding new conditions and tweaking some critical parameters; some minor modifications to the rules are necessary as well. In the end, we finalize our model and verify the results.

Contents

1 Introduction
1.1 Rules and Gameplay
1.2 Design Goal
1.3 Method
1.3.1 Monte-Carlo Simulation
1.3.2 Markov Chain Model
1.4 Markov Chain Game
2 Game Analysis
2.1 Original Prototype Analysis
2.1.1 Distribution of Moves and Probabilities
2.1.2 Expected Square and Cost
2.1.3 The Expected Cost When Game is Finished
2.1.4 Drunken Walk
2.1.5 Cost on the Edge
2.1.6 Portal
3 Board Extension
3.1 New Board Imported
3.2 Analysis of the Board
3.3 Game Tuning
4 Summary and Future Work
Bibliography

List of Figures

1.1 The Initial Game Board
1.2 Game Process with Corresponding Probabilities
2.1 Probability of Finishing the Game in 50 Moves
2.2 Tabular Data of Game Length Simulation
2.3 Distribution and Boxplot of Game Length Simulation
2.4 Distribution of Square Landed in 5, 10 Moves
2.5 Distribution of Square Landed and Cost at the 15th Move
2.6 Expected Square Landed
2.7 Estimated Cost in Total
2.8 Expected Square Landed and Estimated Cost in Total
2.9 Average and Max Cost When the Game is Completed
2.10 Original Markov Chain
2.11 Original Markov Chain
2.12 Revised Markov Chain
2.13 Game Board with the 7-4 Portal
2.14 Markov Chain with the 7-4 Portal
2.15 Matrix M
2.16 Cost at Each State
3.1 The Extended Game Board
3.2 The Transition Matrix
3.3 Distribution of Game Length and Probability of Number of Moves When the Game is Finished
3.4 Boxplot of Moves and Cost When the Game is Finished
3.5 The Extended Visits on Each Square
3.6 The Extended Game Board
3.7 The Extended Game Board with 6 Portals
3.8 The Extended Game Board with 8 Portals and Black Hole 27
3.9 The Extended Game Board with 8 Portals and Black Hole 27
3.10 Cost of States of the New Markov Chain
3.11 Colored Board Showing the Times of Visits to Each State

List of Tables

2.1 Result of Math Solution
2.2 Result of 10000-player Simulation
2.3 Expected Visits to Each State From Square 1
3.1 List of Parameters in 3x3 Board
3.2 Cost on State
3.3 List of Parameters in Final Board

Chapter 1

Introduction

In many popular board games, probabilistic reasoning plays a critical role. For example, in Monopoly the basic quantity of interest for strategic purposes is the probability of being on any of the 40 positions in a given turn. In another chance-based board game, Chutes and Ladders, players are interested in the probability of finishing the game within a certain number of moves. Understanding the mathematical model behind these games is important for game design, since any modification of the rules or parameters can change the whole experience. In this paper we explore a method to analyze the prototype of a chance-based board game.

In Chapter 1 we introduce the initial version of our game prototype together with our ultimate design goals. We then state the problems we are interested in and introduce the Markov chain model and Monte Carlo simulation used to solve them. At the end of this chapter, an analysis is given to check whether the results are acceptable for a smooth and long-enough gameplay.

The rest of this paper is organized as follows. In Chapter 2 we discuss an extension of the original version. In order to meet our goal, we add new rules on top of the first prototype to increase complexity, and change some parameters such as the transition matrix. We then analyze the new game, compare the results to our goal, and continue tuning. In Chapter 3 we demonstrate the rules and results of our final version.

1.1 Rules and Gameplay

Suppose we have a game played on a 3x3 gridded board, numbered sequentially from 1, in the lower left corner, to 9, the center (the winning square). Players take turns moving their pawns toward the winning state 9. A representation of the board is shown in Figure 1.1. Here we assume two players are playing.
All players start off the board. On each turn, a player rolls a six-sided die that determines how many steps to move:

1. Roll 1 or 2: 1 step backward (toward 1);
2. Roll 3, 4, or 5: 1 step forward (toward 9);
3. Roll 6: 2 steps forward (toward 9).

Figure 1.1: The Initial Game Board

In a digital game, the die can be replaced by a 'Go' button, which makes it easier for the designer to tweak the probability of going from one spot to another. Each move costs some money depending on the square the player is on. At the beginning of the game each player is given a fixed amount of money. The first player to land on the center square wins. A losing condition applies if a player is declared bankrupt before reaching the center.

1.2 Design Goal

As designers, we are responsible for providing a smooth and pleasant experience: we want the game to be completed in a reasonable time while retaining a certain degree of complexity. We therefore set our goals as follows:

1. Assume each player spends 7 seconds on each turn, i.e., 14 seconds per round for two players. The expected time to finish the game should be about 12 minutes, which means the game should finish within 50 moves.
2. Design the initial asset for each player so that 75% of all players can finish the game before going bankrupt.
3. Explore the consequences of a dead end on the board: whenever a player lands on a certain square, she is declared bankrupt immediately. If we instead modify the bankruptcy rule so that a bankrupt player is moved back to the starting square with her asset reset, how long does an average game take?
4. If the results fail to satisfy the goals above, modify the game by adding or changing some rules, or tweaking some parameters.
5. Continue this iteration until we are happy with the results.
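The dice rules above translate directly into a simulation. Below is a minimal Monte Carlo sketch (our own Python code, not the thesis's Matlab; the function name is an assumption) that plays one-player games under these movement rules:

```python
import random

def play_game(rng, start=1, goal=9):
    """Simulate one game; return the number of moves needed to reach the goal."""
    square, moves = start, 0
    while square != goal:
        roll = rng.randint(1, 6)
        if roll <= 2:                      # roll 1-2: one step backward
            square = max(square - 1, 1)    # square 1 bounces back to itself
        elif roll <= 5:                    # roll 3-5: one step forward
            square = min(square + 1, goal)
        else:                              # roll 6: two steps forward
            square = min(square + 2, goal)
        moves += 1
    return moves

rng = random.Random(0)
lengths = [play_game(rng) for _ in range(10000)]
print(sum(lengths) / len(lengths))                    # sample mean game length
print(sum(l <= 50 for l in lengths) / len(lengths))   # fraction finished within 50 moves
```

With 10000 simulated players this reproduces, up to sampling noise, the game-length statistics analyzed in Chapter 2.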
Next we introduce the methods we use to evaluate our game, and state the questions we are interested in.

1.3 Method

Because of the cyclic nature of the game (the die can send you backwards, and after we add backward portals there is no theoretical upper bound on the number of moves a game can take), we need a way to compute the probability of events such as winning, losing, or a given game length. We have two basic mechanisms for calculating these probabilities: experimentation and formal modeling. A similar approach is used in analyzing the board game Snakes and Ladders[2].

1.3.1 Monte-Carlo Simulation

In experimentation, we simply repeat an experiment many times and record the relative frequencies: more likely events happen more often, less likely events less often. This is the core concept of Monte Carlo simulation, introduced by Nicholas Metropolis[3]. Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results[4]. In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable[9].

We use Monte Carlo simulation because it is easy to implement: it yields results without requiring a full understanding of the internal workings or subtleties of the mechanism. One important application of Monte Carlo simulation in game design can be seen in social casino games.
Suppose we are creating a fun mini-game for a large social casino game (which uses a virtual currency like gems or chips). If we got the odds of the mini-game wrong and the contest was too loose, we would rapidly flood the economy with currency, causing devaluation and rampant inflation[1]. This could have disastrous consequences. Using a Monte Carlo simulation, we can tweak the parameters and rules of the game, run a few million test games, analyze the results, and adjust again until we are happy with the outcome.

1.3.2 Markov Chain Model

A discrete-time, finite-state Markov process (also called a finite Markov chain) is a system having a finite number of attitudes or states, which proceeds sequentially from one state to another, and for which the probability of passing from state i to state j is a number p_ij that depends only on i and j, and not, say, on the history of the system in attaining state i. We shall only be concerned here with the situation in which p_ij does not depend on the time at which the transition from i to j is made; in this case, the chain is said to have stationary transition probabilities[5].

Games like our prototype are ideal candidates for Markov chain analysis because, at any time, the probabilities of future events do not depend on what happened in the past. If a player is on square 9 of the board, the probability of what happens on the next roll is independent of how the player got to square 9. It is easy to see how our game differs, for instance, from a single-deck game of Blackjack at a casino. In Blackjack, the probability that events will occur in the future, and thus the optimal strategy, depends on the cards that have already been played; it is for this reason that card counting works.

At the heart of Markov chain analysis is the concept of a stochastic process.
This simply means that, from a given state, there is a set of possibilities for what happens next, defined by a probability distribution (implicit in the definition is that all the probabilities add up to 1). In our game, a player is on a particular square; we do not care how she got there, only that she rolls the die again and acts on the result. If a player is on square G when she rolls, one of three things can happen (with the corresponding probabilities), and based on these probabilities the player advances to one of the next squares or moves one step backward, as shown below.

Figure 1.2: Game Process with corresponding probabilities

These probabilities can be represented as a sparse matrix that records the probability of moving from square i to square j in the entry in row i and column j; this is the transition matrix P. We can now present our game as a Markov chain model and address the problems.

1.4 Markov Chain Game

Let $\{X_1, X_2, \ldots\}$ be a sequence of random variables taking values in the state space $\{1, 2, \ldots, 9\}$. If $X_n = i$, we say the chain is in state $i$ at the $n$th step. Let $p_{ij}$ be the probability of transitioning from $i$ to $j$; the $(i,j)$th element of the transition matrix is just $p_{ij}$. The evolution of the chain is described by its transition probabilities $p(X_{n+1} = j \mid X_n = i)$[7]. In our case the Markov condition is satisfied:

$$p(X_{n+1} = j \mid X_n = i,\ X_k = i_k,\ k = 1, 2, \ldots, n-1) = p(X_{n+1} = j \mid X_n = i) = p_{ij}$$

for all $n \geq 1$ and all $X_1, X_2, \ldots, X_n$. Therefore $\{X_n\}$ is a Markov chain by definition.
We also have

$$p_{ij} \geq 0 \quad (i, j = 1, 2, \ldots, 9), \qquad \sum_{j=1}^{9} p_{ij} = 1 \quad (i = 1, 2, \ldots, 9).$$

The transition matrix P in our case is:

$$P = \begin{pmatrix}
\frac{1}{3} & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{2}{3} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

In the next chapter we will analyze our game by discussing the following questions based on the Markov chain model:

1. The probability that a single player finishes the game (from 1 to 9) within 50 moves.
2. The expected number of moves the game takes.
3. Assume there is a cost each time the player moves, associated with the player's location: when the player is on a square and ready to move, she pays the cost of that square regardless of direction. Suppose the cost vector is $c = (c_1, c_2, \ldots, c_8)$. If a player has 20 moves, what is the expected location in the end, and what is the average cost of getting to that location?
4. The expected cost when the game is finished, which helps us design the initial asset.
5. The expected number of times we are in each non-absorbing state and the corresponding cost. What is the probability of reaching absorbing state j, assuming we start at state i? What if we start at state m (1 < m < 9) with two absorbing states 1 and 9?
6. Now suppose the cost is associated with the edge instead of the state: when the player is at location i, the cost of moving forward is $c_{i1}$ and backward is $c_{i2}$. We are then interested in the expected cost when the game is finished.

Chapter 2

Game Analysis

2.1 Original Prototype Analysis

2.1.1 Distribution of Moves and Probabilities

We analyze the game process by looking at the distribution of the number of moves when the game is finished, and the probability of finishing the game within N moves.
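The transition matrix above can also be written down programmatically, which avoids hand-entry errors when the board size changes later. A small sketch (our own helper, not from the thesis), assuming squares are numbered 1 to n with n absorbing:

```python
import numpy as np

def transition_matrix(n=9):
    """Build the n-state transition matrix: from each transient square, the
    player moves back 1 with prob 1/3, forward 1 with prob 1/2, and forward 2
    with prob 1/6, clamped to the board; square n is absorbing."""
    P = np.zeros((n, n))
    for s in range(1, n):                      # transient squares 1 .. n-1
        i = s - 1
        P[i, max(s - 1, 1) - 1] += 1/3         # backward (square 1 bounces to itself)
        P[i, min(s + 1, n) - 1] += 1/2         # one step forward
        P[i, min(s + 2, n) - 1] += 1/6         # two steps forward (clamped at n)
    P[n - 1, n - 1] = 1.0                      # the winning square is absorbing
    return P

P = transition_matrix()
print(np.allclose(P.sum(axis=1), 1.0))         # every row is a probability distribution
```

Note how the clamping reproduces the boundary rows: row 1 gets 1/3 on its diagonal, and row 8 gets 2/3 into square 9.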
First, we are interested in the probability that a single player finishes the game (from 1 to 9) within 50 moves. The probability of reaching state $j$ from state $i$ in $k$ steps is

$$p^{(k)}_{ij} = p(X_{k+1} = j \mid X_1 = i).$$

According to the Chapman-Kolmogorov equation,

$$p_{ij}(m, m+n+r) = \sum_k p_{ik}(m, m+n)\, p_{kj}(m+n, m+n+r).$$

Therefore $P^{(n)} = P^{(m)} P^{(n-m)}$, and in particular $P^{(n)} = P^{(n-1)} P = P^n$. So the entry $p^{(n)}_{ij}$ of the matrix $P^n$ is the probability we want.

Next, in order to check whether the game length is acceptable, we find a mathematical solution for the expected number of moves. Let $E_i$ be the expected number of steps to reach an absorbing state from state $i$. Then $E_i$ satisfies

$$E_i = 1 + p_{i1} E_1 + \cdots + p_{ij} E_j + \cdots + p_{in} E_n = 1 + \sum_{j=1}^{n} p_{ij} E_j. \qquad (*)$$

The 1 is the cost of making a single transition, and the sum corresponds to the probability of transitioning to some state $j$ times the expected number of steps needed to reach an absorbing state from state $j$. If state $i$ is absorbing, then $E_i = 0$; in our case $E_9 = 0$. Rearranging $(*)$ gives

$$E_i - \sum_{j=1}^{n} p_{ij} E_j = 1.$$

With $P$ the transition matrix and $E$ the vector of expected numbers of steps,

$$(I - P)E = \mathbf{1},$$

where $I$ is the identity matrix and $\mathbf{1}$ is the column vector of all 1s. Using Matlab we obtain the following result:

Result: Math Solution. Starting state: 1; target state: 9; max steps: 50. For n = 50, the entry in row 1, column 9 of $P^n$ is p = 0.998002.

Figure 2.1: Probability of finishing the game in 50 moves

Since there is no player-player interaction (even though, by assumption, two players are on the same board), the Monte Carlo simulation only needs to consider the movement of one player. The result of the simulation is shown below:

Result: Simulation. 10000 players are involved in the test; 9991 of them reached the target state within 50 steps. The ratio is 0.9991.

According to these results, almost all players can finish the game within 50 moves. In fact, 85 percent of players finish the game within 20 moves.
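Both numbers above can be reproduced with a few lines of linear algebra. A sketch (our own code; the transition matrix is the one defined in Section 1.4). Note that the absorbing row makes $I - P$ singular, so in practice we solve $(I - Q)E = \mathbf{1}$ over the transient states only:

```python
import numpy as np

# Transition matrix from Section 1.4 (squares 1..9, square 9 absorbing).
P = np.zeros((9, 9))
for s in range(1, 9):
    P[s-1, max(s-1, 1)-1] += 1/3
    P[s-1, min(s+1, 9)-1] += 1/2
    P[s-1, min(s+2, 9)-1] += 1/6
P[8, 8] = 1.0

# Probability of having reached square 9 within 50 moves, starting at square 1.
p50 = np.linalg.matrix_power(P, 50)[0, 8]

# Expected moves to absorption: solve (I - Q) E = 1 over the transient block.
Q = P[:8, :8]
E = np.linalg.solve(np.eye(8) - Q, np.ones(8))
print(round(p50, 6), round(E[0], 4))
```

This reproduces the math-solution values 0.998002 and 14.7365 quoted in the text.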
We also extend the maximum number of moves from 50 to 100 to get a more thorough analysis from the simulation data. Descriptive statistics show that the expected game length is 14.74 moves (math solution: 14.7365; simulation: 14.74) with a standard deviation of 7.35. Half of the players finished the game within 13 moves and 75% within 18 moves; the longest game lasted 71 moves while the shortest took 4.

Table 2.1: Result of Math Solution
Result Type: Math Solution
Start State: 1
Target State: 9
Max Steps Input: 100
Expected Steps: 14.7365

Table 2.2: Result of 10000-player Simulation
Result Type: Simulation
Start State: 1
Target State: 9
Max Steps Input: 100
Number of Players: 10000
Max Moves When Game is Finished: 71
Min Moves When Game is Finished: 4
Average Moves When Game is Finished: 14.74
Median: 13
Standard Deviation: 7.35416

Figure 2.2: Tabular Data of Game Length Simulation

This result indicates that the average game length is much shorter than we expected. Thus we should take the size of the board into account when we modify the game.

2.1.2 Expected Square and Cost

Now we want to know the expected square after a given number of moves, and the expected total cost paid along the way. The first step is to figure out the expected square at the $n$th move. Let $p^{(n)}_{ij}$ be the probability of transitioning from $i$ to $j$ in $n$ moves, and let $Y$ be the expected square; starting from state $i$,

$$Y = \sum_{j=1}^{N} j\, p^{(n)}_{ij}.$$

Since $P^n$ is a probability matrix, the sum of each row of $P^n$ is 1. So, from any state before 9, no matter how many moves the player makes, the expected square is less than 9.
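The formula for $Y$ can be evaluated directly. A sketch (our own code) that computes the expected square after 15 moves, matching the math-solution value 7.70625 in the results table below:

```python
import numpy as np

# Transition matrix (squares 1..9, square 9 absorbing), as in Section 1.4.
P = np.zeros((9, 9))
for s in range(1, 9):
    P[s-1, max(s-1, 1)-1] += 1/3
    P[s-1, min(s+1, 9)-1] += 1/2
    P[s-1, min(s+2, 9)-1] += 1/6
P[8, 8] = 1.0

# Expected square after n moves from square 1: Y = sum_j j * (P^n)[1, j].
Pn = np.linalg.matrix_power(P, 15)
Y = sum(j * Pn[0, j-1] for j in range(1, 10))
print(round(Y, 5))                             # expected square at the 15th move
```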
Now let us compute the average cost $C^{(n)}_x$ of making $n$ moves starting from square $x$. Let $\vec{c}$ be the cost vector and $I_x$ the indicator row vector of state $x$.

Figure 2.3: Distribution and Boxplot of Game Length Simulation

The cost for 1 move: $C^{(1)}_x = c(x) = I_x \vec{c}$.

The cost for 2 moves: $C^{(2)}_x = I_x \vec{c} + I_x P \vec{c}$.

The cost for 3 moves:

$$C^{(3)}_x = I_x \vec{c} + I_x P \vec{c} + \sum_{i=1}^{N} P(x,i) \sum_{j=1}^{N} P(i,j)\, c(j) = I_x \vec{c} + I_x P \vec{c} + I_x P^2 \vec{c}.$$

The cost for 4 moves:

$$C^{(4)}_x = I_x \vec{c} + I_x P \vec{c} + I_x P^2 \vec{c} + \sum_{i=1}^{N} P(x,i) \sum_{j=1}^{N} P(i,j) \sum_{k=1}^{N} P(j,k)\, c(k) = I_x \vec{c} + I_x P \vec{c} + I_x P^2 \vec{c} + I_x P^3 \vec{c}.$$

Hence the cost for $n$ moves is

$$C^{(n)}_x = I_x \left( \sum_{i=1}^{n} P^{i-1} \right) \vec{c}.$$

Result
Result Type: Math Solution / Simulation
Start State: 1 / 1
Number of Moves: 15 / 15
Expected Square: 7.70625 / 7.76
Average Cost: 23.8029 / 24.7231
Standard Deviation: - / 2.09204

Figure 2.4: Distribution of Square Landed in 5, 10 Moves
Figure 2.5: Distribution of Square Landed and Cost at the 15th Move

According to the simulation, the expected square will almost surely be 9 after 30 moves, and the estimated total cost is approximately 30 in 30 moves. When the game is over, the average total cost approaches 31 as the maximum number of moves increases.

Figure 2.6: Expected Square Landed
Figure 2.7: Estimated Cost in Total
Figure 2.8: Expected Square Landed and Estimated Cost in Total

2.1.3 The Expected Cost When Game is Finished

This is an extension of the previous problem. In 2.1.2 the number of moves was given; in this section we want to know the cost when the game is finished. It is easy to get this result using the Monte Carlo method of 2.1.2, but the mathematical solution is slightly different. First, let $p^{(k)}_{x \to y}$ be the probability of going from state $x$ to state $y$ in exactly $k$ steps, which means no visit to $y$
So we have: p (k) x!y =I x (PE y ) k1 PI y Then the expected steps from x to y is: T x!y = P 1 k=1 kp x!y (k) =I x h P 1 k=1 k(PE y ) k1 i PI y We can decompose PEy as follows: PE y =AJA 1 Thus we have: T x!y =I x A P 1 k=1 kJ k1 A 1 PI y Because of the structure of J, in most situation P 1 k=1 kJ k1 can be easily computed. Now we are going to get the total cost.Let r = (r 1 ;r 2 ;r 3 ;:::;r N )be the expected steps to each state, then the expected cost from level x to level y is: C x!y = P N i=1 c i r i =cr Obviously, T x!y = P N i=1 r i Since: P 1 k=1 p x!y (k) =I x h P 1 k=1 (PE y ) k1 P i I y = 1 so r =I x nh P 1 k=1 (PE y ) k1 i P +I o E y Specically, ifIPE y is invertible, we can directly compute the result of P 1 k=1 (PE y ) k1 Finally we have: 19 r =I x h (IPE y ) 1 P +I i E y and then get the expected cost C x!y =cr. Result Result Type Math Solution Simulation Start Square 1 1 Targeting Square 9 9 Average Cost 30.7515 30.7425 Figure 2.9: Average and Max cost when game is completed The results from both math solution and Monte-Carlo simulation show that the average cost when game is completed is approximately 30.75. The simulation also indicates that the total cost is close to the average once the maximum move is bigger than 25. Meanwhile, in the worst situation, a player could pay as many as 188 gems when is game is completed comparing to the average 30.75. 2.1.4 Drunken Walk In this section we will explore a little bit more on absorbing states. According to the previous assumption our game ends only if the player lands on square 9.In we are adding another condition, say, there's a 'black hole' square on the board that landing on this square will also end the game. We would like to know how this additional square will aect the average moves and cost. Now before we start adding the black hole, rst we will introduce another useful math method to solve the problem of average moves and cost. 
Additional, if we would like to know not only the average moves and cost, but also how many times the player visited each square, what's the cost on each square, this method could lead us to the answers. This method is used to solve the absorbing Markov Chain problem Drunken Walk[6]. 20 Suppose the we have the transition matrix P as mentioned in Chapter 1. First we shue the rows and columns so that all of the absorbing states are together. By doing so, we obtain a sort of canonical form forP , which is as follows: M = 2 6 4 I r 0 S Q 3 7 5 wherer is the number of absorbing states. Let s be the number of non-absorbing states. Suppose we have 9 states with 1 absorbing state, then s = 8, and the transition matrix P 0 after shuing rows and columns is: P = 0 B B B B B B B B B B B B B B B B B B B B B @ 1 0 0 0 0 0 0 0 0 0 1 3 1 2 1 6 0 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 0 1 3 0 1 2 1 6 1 6 0 0 0 0 0 1 3 0 1 2 2 3 0 0 0 0 0 0 1 3 0 1 C C C C C C C C C C C C C C C C C C C C C A so thatr = 1;s = 8, S= 0 B B B B B B B B B B B B B B B B B B @ 0 0 0 0 0 0 1 6 2 3 1 C C C C C C C C C C C C C C C C C C A Q= 0 B B B B B B B B B B B B B B B B B B @ 1 3 1 2 1 6 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 1 3 0 1 2 1 6 0 0 0 0 0 1 3 0 1 2 0 0 0 0 0 0 1 3 0 1 C C C C C C C C C C C C C C C C C C A The fundamental matrix T is : T =I r +Q+Q 2 +Q 3 +Q 4 + = (I s Q) 1 The probability of getting from state I to state j in k steps is Q k i;j . ThusT i;j is the probability of getting from statei to statej in 0 steps, plus the probability in 1 step, plus the probability in 2 steps, etc. This sum is the expected number of times we o from state i to statej. 
If we add the entries in the $i$th row of $T$, we obtain the expected number of steps spent in non-absorbing states:

$$T = (I_8 - Q)^{-1} = \begin{pmatrix}
2.45 & 1.89 & 2.01 & 1.96 & 1.92 & 1.82 & 1.60 & 1.10 \\
1.10 & 2.19 & 1.94 & 1.97 & 1.92 & 1.82 & 1.60 & 1.10 \\
0.49 & 0.98 & 2.21 & 1.91 & 1.93 & 1.82 & 1.60 & 1.10 \\
0.22 & 0.44 & 0.98 & 2.19 & 1.87 & 1.83 & 1.59 & 1.10 \\
0.10 & 0.19 & 0.43 & 0.96 & 2.14 & 1.77 & 1.61 & 1.10 \\
0.04 & 0.08 & 0.19 & 0.41 & 0.92 & 2.04 & 1.55 & 1.11 \\
0.02 & 0.03 & 0.07 & 0.17 & 0.37 & 0.82 & 1.82 & 1.05 \\
0.01 & 0.01 & 0.02 & 0.06 & 0.12 & 0.27 & 0.61 & 1.35
\end{pmatrix}$$

In terms of our problem, where the player starts from square 1 (row 1), the expected numbers of visits to squares 1-8 are

$$p = [2.45, 1.89, 2.01, 1.96, 1.92, 1.82, 1.60, 1.10].$$

So the total expected number of moves is $2.45 + 1.89 + 2.01 + 1.96 + 1.92 + 1.82 + 1.60 + 1.10 = 14.75$, which matches the result we got in 2.1.1. If the cost vector is $c = [1, 1, 2, 2, 2, 3, 3, 4, 0]$, then the expected cost is $p \cdot c = 30.78$.

To find the probability that the player ends up in square 9, we calculate the matrix $TS$; the entry $(TS)_{i,j}$ is the probability that the Markov chain ends up in absorbing state $j$, assuming it starts in non-absorbing state $i$. In this case, with square 9 as the only absorbing state, $TS = \mathbf{1}$.

If there is more than one absorbing state, we can use the same method. Assume we have another absorbing state, square 5: when a player lands on square 5, the game is over.
So the transition matrix P is

$$P = \begin{pmatrix}
\frac{1}{3} & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} & \frac{1}{6} \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{2}{3} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

Thus $P'$ after shuffling rows and columns is

$$P' = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{2} & \frac{1}{6} & \frac{1}{3} & 0 & 0 & 0 \\
0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & \frac{1}{2} & 0 & \frac{1}{3} & 0 & 0 & \frac{1}{6} & 0 & 0 \\
0 & 0 & \frac{1}{2} & \frac{1}{6} & 0 & \frac{1}{3} & 0 & 0 & 0 \\
0 & \frac{1}{3} & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{6} \\
\frac{1}{6} & 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} \\
\frac{2}{3} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0
\end{pmatrix}$$

So $r = 2$, $s = 7$, and $S$ and $Q$ are

$$S = \begin{pmatrix}
0 & 0 \\ 0 & \frac{1}{6} \\ 0 & \frac{1}{2} \\ 0 & 0 \\ 0 & \frac{1}{3} \\ \frac{1}{6} & 0 \\ \frac{2}{3} & 0
\end{pmatrix}, \qquad
Q = \begin{pmatrix}
0 & \frac{1}{2} & \frac{1}{6} & \frac{1}{3} & 0 & 0 & 0 \\
\frac{1}{3} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & \frac{1}{3} & 0 & 0 & \frac{1}{6} & 0 & 0 \\
\frac{1}{2} & \frac{1}{6} & 0 & \frac{1}{3} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{6} \\
0 & 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{2} \\
0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0
\end{pmatrix}$$

The matrix $T$, which gives the expected numbers of visits to the non-absorbing states, is

$$T = (I_7 - Q)^{-1} = \begin{pmatrix}
2.02 & 1.55 & 1.11 & 1.01 & 0.24 & 0.16 & 0.12 \\
0.81 & 1.82 & 1.04 & 0.40 & 0.22 & 0.15 & 0.11 \\
0.27 & 0.61 & 1.35 & 0.13 & 0.29 & 0.19 & 0.14 \\
1.72 & 1.62 & 1.10 & 2.36 & 0.23 & 0.16 & 0.12 \\
0 & 0 & 0 & 0 & 1.29 & 0.86 & 0.64 \\
0 & 0 & 0 & 0 & 0.51 & 1.54 & 0.86 \\
0 & 0 & 0 & 0 & 0.17 & 0.51 & 1.29
\end{pmatrix}$$

so the expected visit vector $p$ is

$$p = [2.02, 1.55, 1.11, 1.01, 0.24, 0.16, 0.12].$$

Therefore the expected number of moves is 6.21, and the estimated cost is 9.49.
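The same fundamental-matrix computation can be scripted. The sketch below (our own code) keeps the natural square ordering 1..9 rather than the shuffled ordering used above, so its rows need not line up one-for-one with the printed matrices, but the qualitative conclusion is the same: most games end in the black hole.

```python
import numpy as np

# Chain with two absorbing squares: the goal 9 and a 'black hole' at 5.
P = np.zeros((9, 9))
for s in range(1, 9):
    P[s-1, max(s-1, 1)-1] += 1/3
    P[s-1, min(s+1, 9)-1] += 1/2
    P[s-1, min(s+2, 9)-1] += 1/6
P[8, 8] = 1.0
P[4] = 0.0
P[4, 4] = 1.0                                  # square 5 also absorbs

transient = [0, 1, 2, 3, 5, 6, 7]              # squares 1, 2, 3, 4, 6, 7, 8
absorbing = [8, 4]                             # squares 9, 5
Q = P[np.ix_(transient, transient)]
S = P[np.ix_(transient, absorbing)]
T = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix
print(T[0].sum())                              # expected moves from square 1
print((T @ S)[0])                              # [P(end at 9), P(end at 5)] from square 1
```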
The two absorbing states mean that when the game is completed, the player is on either square 5 or square 9. The matrix of probabilities that the player eventually lands on square 9 or 5 (columns ordered 9, 5) is:

$$TS = \begin{pmatrix}
0.11 & 0.89 \\
0.10 & 0.90 \\
0.13 & 0.87 \\
0.10 & 0.90 \\
0.57 & 0.43 \\
0.83 & 0.17 \\
0.94 & 0.06
\end{pmatrix}$$

The probability matrix shows that if a player starts at square 1, the probability that she finishes on square 9 is only 11%, compared with an 89% probability of landing on the 'black hole'. So on this 3x3 board, because the game length is too short and most players eventually land on the black hole, we decide that the black hole does not add more uncertainty and fun to the game.

2.1.5 Cost on the Edge

In the previous section, the cost depended only on the states of the Markov chain: as long as we know the number of visits to a square, we know the cost incurred when the player lands on that square. The original Markov chain is shown below:

Figure 2.10: Original Markov Chain

where $c_1, c_2, \ldots, c_8$ are the corresponding state-dependent costs. Now suppose instead that the costs depend on the edges rather than the states. For example, for square 4, the costs of entering from squares 2, 3, and 5 are $c_3$, $a_3$, and $b_4$, respectively. If we use the method of 2.1.4 to get the expected numbers of visits $p$, we cannot simply multiply by a cost vector, because the total number of visits to a square consists of three parts: visits arriving by one move forward, by two moves forward, and by one move backward (Figure 2.11).

Figure 2.11: Original Markov Chain

In order to solve this problem, we introduce a new Markov chain and apply the method of 2.1.4 to it. Suppose we have 9 squares as in the previous section. Split the original state 2 into two new states, 12 and 32, where 32 means arriving at square 2 by moving backward from square 3, and 12 means arriving at square 2 by moving forward from square 1. And break down the original state 3 into 13,
And break down the original state 3 into 13, 24 Figure 2.11: Original Markov Chain 23, 43. Do the same with other states. State 3;4;5;6;7 can be broken down into three new states, and state 1;2;8;9 can be broken down into two. Eventually we will have a new Markov Chain with 23 states. The states that has the same end square could be regarded as a group. Each groupS i has the probability p = 1=6 to move to the rst state of the S i+2 , the probability p = 1=2 to the second state of S i+1 , and the probability p = 1=3 to the second state of S i1 . For example, the probability of moving from states 24,34,54 to 46 is 1=6, to 45 is 1/3, to 43 is 1/3. See picture below. Figure 2.12: Revised Markov Chain Thus we have a revised Markov Chain whose costs is depending on states. The cost vector of visiting state 13,24,35,46,57,68,79 isc = [c 1 ;c 2 ;c 3 ;c 4 ;c 5 ;c 6 ;c 7 ]; costs of visiting state 12,23,34,45,56,67,78,89 is a = [a 1 ;a 2 ;a 3 ;a 4 ;a 5 ;a 6 ;a 7 ];costs of visiting state 11,21,32,43,54,65,76,87 isb = [b 1 ;b 2 ;b 3 ;b 4 ;b 5 ;b 6 ;b 7 ;b 8 ]. After shuing rows and columns, the new transition matrix M : M = 2 6 4 I r 0 S Q 3 7 5, and the matrixI 2 ,S, andQ is shown in below, where r = 2;s = 21. 25 Then the fundamental matrix T = (I 7 Q) 1 is shown below: Then starts from state 1,the expected moves at each states is T 1;i = [1:8153;0:6305;1:2229;0:6687;0:4076;0:9458;0:6527;0:3153;1:0031;0:6398;0:3344;0:9791;0:6060;0:3264;0:9598;0:5320;0:3199;0:9091;0:3670;0:3030;0:7980]. Expected moves is 14:7365. When the cost vector from state 1 to state 21 is C = [1;1;2;1;3;2;1;3;2;1;3;2;1;3;2;1;3;2;1;3;2], the total cost before landed on square 9 isS T1;i = 25:5673. 
Now we also know the probability that the player reaches square 9 from square 7 or from square 8. The absorption-probability matrix TS has two columns, corresponding to the absorbing states 7-9 and 8-9, and 21 rows, one per transient state; its rows fall into the following groups:

           7-9      8-9
rows 1-4:   0.2660   0.7340
rows 5-7:   0.2661   0.7339
rows 8-10:  0.2656   0.7344
rows 11-13: 0.2679   0.7321
rows 14-16: 0.2577   0.7423
rows 17-19: 0.3031   0.6969
rows 20-21: 0.1010   0.8990

So, starting from state 1, the probabilities that the player is eventually absorbed in states 7-9 and 8-9 are 0.2660 and 0.7340, respectively. Assuming that the final cost of move 7-9 is 4 and of move 8-9 is 3, the total expected cost when the game is completed is 25 + 4 x 0.2660 + 3 x 0.7340 = 28.2660.

2.1.6 Portal

Figure 2.13: Game Board with the 7-4 Portal

In this section we discuss the effect of adding portals. A portal is a square from which the player is teleported to another square. For example, if we set square 7 to be a portal to square 4, then whenever the player lands on square 7, the portal immediately sends her to square 4; that is, we have a new state 74 with p_{7,4} = 1, and the probabilities of moving to squares 6, 8, and 9 become 0 (Figure 2.13). Assuming we have the transition matrix P as in section 2.1.5, adding a portal from 7 to 4 introduces a new state 74, where the probability of moving from 57, 67, or 87 to 74 is 1, while the probabilities of moving to 78, 79, and 76 are 0. In fact, since we will never visit states 78, 79, and 76, they could be deleted from our Markov chain.
But we keep them for now. Thus the Markov chain after adding a portal becomes:

Figure 2.14: Markov Chain with the 7-4 Portal

So the matrix M after shuffling rows and columns is:

Figure 2.15: Matrix M

Using the fundamental matrix we get the expected-visit vector T_{1,i}:

State   11      21      12      32      13      23      43      24      34      54      74
T_{1,i} 2.2936  1.5872  1.9404  2.8211  0.6468  2.3807  5.4358  0.7936  4.2317  4.1881  7.0940

State   35      45      65      46      56      76      57      67      87      68      78
T_{1,i} 1.4106  8.1537  3.0000  2.7179  6.2821  0       2.0940  4.5000  0.5000  1.5000  0

Table 2.3: Expected Visits to Each State From Square 1

So on average the player lands on the portal 7.0940 times; specifically, the expected numbers of visits to square 7 from squares 5, 6, and 8 are 2.0940, 4.5000, and 0.5000, respectively. Since teleporting does not count as a move, the total expected number of moves is

sum_i T_{1,i} - T_{1,11} = 63.5711 - 7.0940 = 56.4771,

where T_{1,11} is the expected number of visits to the portal state 74. Assume the cost of the portal is 0, i.e., c_{74} = 0, and the cost vector is c = [1, 1, 2, 1, 3, 2, 1, 3, 2, 1, 0, 3, 2, 1, 3, 2, 1, 3, 2, 1, 3, 2]. Then the total cost is 102.291 + 3 = 105.291, and the cost at each state is shown below:

Figure 2.16: Cost at Each State

According to these results, adding a portal introduces one new state and removes three existing states related to the portal square. When the portal sends the player backwards, the expected moves and costs increase: after adding the portal from 7 to 4, the average number of moves increases from 14.7365 to 56.4771, and the average cost increases from 28.2660 to 105.291. Since adding a portal can change the game length dramatically, some parameters of the portal should be taken into consideration when adding one, for example the positions of its entrance and exit. Think about how much influence we would like the portal to have before importing one into the game: once chosen to be a portal, a state with a higher expected number of visits will affect the game length more than a state with fewer visits.
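The portal analysis follows the same recipe: every edge that enters square 7 is redirected into the new state 74, which then behaves like standing on square 4. A self-contained sketch of mine (again assuming a forward overshoot of square 9 lands on 9):

```python
import numpy as np

ROLLS = {1: 1/2, 2: 1/6, -1: 1/3}
PORTAL = {7: 4}                           # entrance -> exit

def step(sq, d):
    return max(1, min(9, sq + d))         # clamp to the 3x3 board

# Edge states (prev, cur) for the 9-square walk, plus the portal state (7, 4).
states = sorted({(sq, step(sq, d)) for sq in range(1, 9) for d in ROLLS})
states.append((7, 4))
transient = [s for s in states if s[1] != 9]
idx = {s: k for k, s in enumerate(transient)}
n = len(transient)

Q = np.zeros((n, n))
for (i, j) in transient:
    row = idx[(i, j)]
    if j in PORTAL:
        Q[row, idx[(j, PORTAL[j])]] = 1.0         # forced teleport from 57/67/87
    else:
        for d, prob in ROLLS.items():
            nxt = (j, step(j, d))
            if nxt in idx:                         # absorbing moves are left out
                Q[row, idx[nxt]] += prob

T = np.linalg.inv(np.eye(n) - Q)
start, portal = idx[(1, 1)], idx[(7, 4)]
moves = T[start].sum() - T[start, portal]  # teleporting is not a move
print(round(T[start, portal], 4), round(moves, 4))
```

This reproduces the figures above: about 7.094 expected visits to the portal state and an expected game length of about 56.48 moves.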
The direction of the portal (sending a player forward or backward) controls the difficulty of the game by increasing or decreasing its length. The length of the portal, the cost of the portal, and the probability that the portal is activated when a player lands on it can also be modified to bring the game closer to our expectation.

Chapter 3 Board Extension

3.1 New Board Imported

In the previous chapter we analyzed the 3x3 board using the Markov chain model and Monte-Carlo simulation. In this chapter we apply the same method to an extended 6x6 board. Suppose we have a 6x6 board with square 1 the starting point and square 36 the end point. Players roll a die to move from 1 towards 36. All players start off the board. On each turn, a player rolls a 6-sided die that determines how many steps she will move:

1. Roll 1, 2: 1 step backward (towards 1);
2. Roll 3, 4, 5: 1 step forward (towards 36);
3. Roll 6: 2 steps forward (towards 36).

Each move has a cost determined by the square the player is currently standing on and the destination of the move; the cost c_{i->j} corresponds to the state i->j.

Additional conditions:

1. Portal: landing on a portal immediately sends the player to a linked square.
2. Black hole: a black hole traps the player and puts her out of the game.
3. Yellow square: gives the player extra money.
4. Green square: takes a certain amount of money away from the player.

Now we have the rules and procedures of the game. If we are not satisfied with the gameplay itself, some modifications of the rules may be necessary, but we should try adjusting the parameters first: most of the time, changing the numbers changes the whole experience. So suppose we already have all the rules set; the goal now is to tune the game by changing some of the parameters.
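The die-to-step rule above is a small piece of shared machinery for everything that follows; a minimal sketch (helper name `roll_step` is mine) encodes it and checks empirically that it induces the 1/3, 1/2, 1/6 move distribution used throughout the thesis:

```python
import random

def roll_step(r):
    """Map a die face (1..6) to a step on the board."""
    if r <= 2:
        return -1      # roll 1, 2: one step backward
    if r <= 5:
        return 1       # roll 3, 4, 5: one step forward
    return 2           # roll 6: two steps forward

# Sanity check: the empirical step frequencies over many rolls
# should be close to p(-1) = 1/3, p(+1) = 1/2, p(+2) = 1/6.
random.seed(42)
counts = {-1: 0, 1: 0, 2: 0}
for _ in range(60000):
    counts[roll_step(random.randint(1, 6))] += 1
print({d: round(c / 60000, 3) for d, c in counts.items()})
```

The same mapping is what drives both the transition probabilities p_{i,i+1} = 1/2, p_{i,i+2} = 1/6, p_{i,i-1} = 1/3 and the Monte-Carlo simulations.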
In our game, changes to the transition matrix affect the expected moves and the total cost, while changes to the cost matrix affect only the total cost. The parameters that can have an impact on the transition matrix include the probabilities of moving 1, 2, and -1 steps; the numbers of non-absorbing and absorbing states; and the number, direction, and length of portals. Other parameters, such as the yellow and green squares, the cost type (on states or on edges), the basic cost of moving, and the cost of the portal, all influence the total cost, and thus need to be taken into consideration when we design the initial asset for the player. The parameters used in Chapter 2 are listed below:

Parameter Name                   Value 1        Value 2
Board Size                       3x3            -
Square of Absorbing State        9              -
Number of Black Holes            0              -
p of moving 1 step forward       1/2            -
p of moving 2 steps forward      1/6            -
p of moving 1 step backward      1/3            -
Number of Portals                1              -
Portal Entrance/Exit             7              4
Cost Type                        On the edge    -
Number of Yellow/Green Squares   0              0

Table 3.1: List of Parameters in 3x3 Board

In the next section we will build the game on the extended board step by step by adding and adjusting these parameters.

3.2 Analysis of the Board

Figure 3.1: The Extended Game Board

First we analyze the original 6x6 game board without portals, colored squares, or black holes. Suppose we have a 6x6 board with the starting point on square 1 and the ending point on square 36. The probability of moving from state i to another state is p_{i,i+1} = 1/2, p_{i,i+2} = 1/6, p_{i,i-1} = 1/3, and p_{i,i} = 0 for i not equal to 1 or 36, with p_{1,1} = 1/3 and p_{36,36} = 1. Then we form a Markov chain with states 1 to 36. This Markov chain has one absorbing state, square 36; states 1 to 35 are transient states from which it is possible to reach the absorbing state. The cost type is on the state.
The dimension of the cost vector c is 36, and the costs on the states range from 1 to 6 as in Table 3.2:

State  Cost   State  Cost   State  Cost   State  Cost
1      1      10     2      19     4      28     5
2      1      11     2      20     4      29     5
3      1      12     2      21     4      30     5
4      1      13     3      22     4      31     6
5      1      14     3      23     4      32     6
6      1      15     3      24     4      33     6
7      2      16     3      25     5      34     6
8      2      17     3      26     5      35     6
9      2      18     3      27     5      36     0

Table 3.2: Cost on State

We made a small modification on state 35 so that moving either 1 or 2 steps lands on square 36. The board and the transition matrix are shown in Figure 3.1 and Figure 3.2. We have an absorbing Markov chain in which the probability that the process will be absorbed is 1, i.e., the matrix Q from the canonical form satisfies Q^n -> 0 as n -> infinity. Additionally, the matrix Q is regular and thus irreducible; in other words, for some n it is possible to go from state i to state j in exactly n steps.

Figure 3.2: The Transition Matrix

Result: After running the program in Matlab, the expected game length from the Markov chain model is 68.734, while the simulation gives 69.392 with a sample size of 1000. According to the simulation, among 1000 players the maximum number of moves is 142 and the minimum is 32. 25% of players finished the game within 57 moves, 50% within 67 moves, and 75% within 80 moves. The interquartile range is 23 and the standard deviation is 18.2914. The outliers are [142 140 135 131 130 128 126 125 123 121 121 120 120 119 119 119 118 116 116 115]. Only 158 players out of 1000 finished the game within 50 moves. After 50 moves, the expected square we land on is 26; the furthest square reached is 36 and the lowest is 3. The expected numbers of visits to each square are [2.44 1.89 2.02 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 1.99 1.98 1.96 1.92 1.82 1.6 1.1], see Figure 3.5. The average costs from our mathematical solution and from simulation are 230.599 and 228.925, respectively. The first, second, and third quartiles are 112, 224.5, and 268.73. The interquartile range is 84.75.
Outliers: [550 485 470 461 420 419 415 415 412 411 404 402 399 398]. The results show that even the most extreme player spent 550 game currency by the time the game was completed, which could serve as a reference when we design the initial asset for the player. The number of moves is higher than what we expected, so some adjustment to shorten the game is definitely necessary at this point.

Figure 3.3: Distribution of Game Length and Probability of Number of Moves When Game is Finished

Figure 3.4: Boxplot of Moves and Cost When Game is Finished

3.3 Game Tuning

Suppose we build our game under the assumption that we keep the same dice, meaning the probabilities of moving forward and backward stay the same. In order to add complexity but at the same time shorten the game, we change other parameters such as the cost type and the number and positions of portals and black holes.

Figure 3.5: The Expected Visits on Each Square

First, let us change the cost type from depending on states to depending on edges, which means the cost is now c_{i->j} instead of c_i. Because of this change, we have a new Markov chain with 104 states:

Figure 3.6: The Extended Game Board

Then we set some parameters of the portals. In order to shorten the game, we add more upward portals than downward portals. Suppose this 6x6 board has 3 upward portals, 8->18, 13->20, and 24->28, with portal lengths 10, 7, and 4, respectively; 2 downward portals, 16->11 and 32->26, with portal lengths 5 and 6; 2 bank squares that provide the player with $20, squares 5 and 30; and 1 bankrupt square that takes $20 away from the player who lands on it, square 22. The new board can be presented as follows:

Figure 3.7: The Extended Game Board with 6 Portals

The result is that the expected number of moves is 154 and the average cost is -87.1705, which shows that adding three upward portals and two downward portals actually prolongs the game by 126%, while adding two yellow squares with a $20 bonus and one green square with a $20 charge results in rewarding the player with $87 on average.
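Before tuning further, it helps to be able to recompute the section 3.2 baseline quickly: the exact expected length of 68.734 from the fundamental matrix and a Monte-Carlo estimate near 69. The sketch below is mine, not the thesis's Matlab program; it assumes a roll past square 36 still lands on 36, matching the modification on state 35.

```python
import numpy as np
import random

N = 36                                    # 6x6 board, squares 1..N
ROLLS = [(1, 1/2), (2, 1/6), (-1, 1/3)]

# Exact expected game length: solve (I - Q) t = 1 for the transient squares.
Q = np.zeros((N - 1, N - 1))              # transient squares 1..35
for sq in range(1, N):
    for d, prob in ROLLS:
        dest = max(1, min(N, sq + d))     # backward roll on square 1 stays put
        if dest < N:
            Q[sq - 1, dest - 1] += prob
t = np.linalg.solve(np.eye(N - 1) - Q, np.ones(N - 1))
print(round(t[0], 3))                     # expected moves from square 1

# Monte-Carlo estimate with 1000 simulated players.
random.seed(0)
lengths = []
for _ in range(1000):
    sq, moves = 1, 0
    while sq != N:
        r = random.randint(1, 6)
        d = -1 if r <= 2 else (1 if r <= 5 else 2)
        sq = max(1, min(N, sq + d))
        moves += 1
    lengths.append(moves)
avg = sum(lengths) / len(lengths)
print(round(avg, 3))                      # close to the exact value
```

The exact value matches the 68.734 above, and the simulated mean fluctuates around it, which is the same cross-check the thesis performs between the model and the simulation.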
So in the next step we create one more upward portal and modify the reward and charge of the colored squares. Let us add an upward portal 4->10, change the reward of the yellow squares from $20 to $5, and increase the charge of the green square from $20 to $30. After running the program, the expected number of moves is 86.6232 and the average cost is 367.718, so adding a 6-length upward portal shortens the game length by 40%. We want the expected number of moves to be close to 50 while keeping the average cost, after adding these colored squares, close to the cost before adding them. So we add a 4-length upward portal 15->19, shorten the portal 32->26 to 32->29, and adjust the charge of square 22 from $30 to $20. This time the average number of moves is 59.9741 and the average cost is 234.939, which is very close to what we wanted in Chapter 1.

Now let us add a black hole to the board. Since the black hole can dramatically change the game length, first we try placing it on a square close to the end point: we set square 27 to be the black hole. The board is now:

Figure 3.8: The Extended Game Board with 8 Portals and Black Hole 27

The result is an average of 29.0639 moves and an average cost of 106.587. The probabilities of ending in the absorbing states when the game is completed are p = [0.0062, 0.0093, 0.3631, 0.2071, 0.4142], corresponding to the states 25->27, 26->27, 28->27, 34->36, and 35->36. Obviously, the probability of ending in the black hole is too high: more than 36% of players will be stuck in the black hole and have to quit the game. Since we want our black hole to have less impact on the board, we move it to a position that players rarely visit. Looking at the numbers of visits to each square (Figure 3.9), we find that square 17 receives very few visits: v_{15->17} = 0, v_{16->17} = 0, v_{18->17} = 0.2326. So this time we move the black hole from 27 to 17.
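The tuning loop in this section is easy to script: wrap the board configuration in a simulator and re-run it after every parameter change. Below is a hypothetical harness of my own (the function name and the omission of the edge costs c_{i->j} are my simplifications; it estimates only the game length), shown with the seven portals named in the text and the two candidate black-hole positions:

```python
import random

def simulate(portals, black_holes, n_games=2000, seed=1):
    """Average game length on the 6x6 board for one parameter setting.
    portals: dict entrance -> exit; black_holes: squares that end the game."""
    random.seed(seed)
    total = 0
    for _ in range(n_games):
        sq, moves = 1, 0
        while sq != 36 and sq not in black_holes:
            r = random.randint(1, 6)
            d = -1 if r <= 2 else (1 if r <= 5 else 2)
            sq = max(1, min(36, sq + d))
            sq = portals.get(sq, sq)      # teleporting is instant, not a move
            moves += 1
        total += moves
    return total / n_games

portals = {8: 18, 13: 20, 24: 28, 4: 10, 15: 19, 16: 11, 32: 29}
near_end = simulate(portals, black_holes={27})   # black hole close to the end
rare_sq = simulate(portals, black_holes={17})    # black hole on a rare square
print(near_end, rare_sq)
```

As in the text, the black hole at 27 cuts the game far shorter than the one at 17, because square 27 is visited much more often; re-running this after each change is the Monte-Carlo half of the tuning iteration.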
Figure 3.9: The Extended Game Board with 8 Portals and Black Hole 27

Suppose the cost matrix C is as shown below:

Figure 3.10: Cost of States of the New Markov Chain

Now we have the result: average moves = 51.5495 and average cost = 198.632, which satisfies our requirement. The probability of landing in the black hole when the game is finished is 15.95%, while the probability of ending on square 36 is 28.02% from square 34 and 56.03% from square 35. The maximum expected number of visits is 8.76, on square 29, and the minimum is 0, on squares 14, 15, and 16. The average numbers of visits to the non-absorbing states are v = [2.25 1.5 1.12 0.89 0.24 0.16 0.12 0.27 0.50 1.51 1.11 0.80 0.72 0 0 0 0.47 0.60 1.10 0.73 0.23 0.15 0.17 0.29 0.84 1.96 4.54 8.76 6.75 4.83 4.24 2.10 1.68 0.84]. Square 29 costs the most of all squares, with an average of 43.8187: on average, players spend 11.35 landing on square 29 from 28, 11.25 from 30, and 21.21 from 32. A colored board showing the number of visits to each square is given in Figure 3.11.

Figure 3.11: Colored Board Showing the Times of Visits to Each State

So, after using the Markov chain model and Monte-Carlo simulation, we have the board and parameters of our final version, as shown in Figure 3.12 and Table 3.3. Of course, if we want to add more complexity without changing the basic rules, we can add more colored squares and portals, or change the parameters of the colored squares, portals, and black holes by applying additional conditions to them.
Parameter Name                   Value 1        Value 2
Board Size                       6x6            -
Squares of Absorbing States      17             36
Number of Black Holes            1              -
p of moving 1 step forward       1/2            -
p of moving 2 steps forward      1/6            -
p of moving 1 step backward      1/3            -
Number of Portals                8              -
Portal 1 Entrance/Exit           8              18
Portal 2 Entrance/Exit           13             20
Portal 3 Entrance/Exit           24             28
Portal 4 Entrance/Exit           4              10
Portal 5 Entrance/Exit           15             19
Portal 6 Entrance/Exit           16             11
Portal 7 Entrance/Exit           32             29
Cost Type                        On the edge    -
Number of Yellow/Green Squares   2              1
Yellow Square Bonus              $5             -
Green Square Charge              $20            -
Yellow Square Locations          5              30
Green Square Location            22             -

Table 3.3: List of Parameters in Final Board

For example, instead of transporting the player to the linked square whenever she lands on the portal, we can add a die roll to determine whether she is teleported. Similarly, we can use another die to decide how much the player should pay when she lands on the green square, or we could make the black hole escapable if the player rolls a 6. We can then adjust the parameters in the program and in the transition matrix to get the results.

Chapter 4 Summary and Future Work

The Markov chain model and the Monte-Carlo method are important tools in game design; today they are widely used in designing chance-based board games and casino games. In this paper we provided an application of these methods to the design and tuning of a board game. Chapter 1 is an introduction to the simplest version of our Markov chain game, a 3x3 chance-based board game. We set our design goals and explain the iteration process. In Chapter 1 we also introduce the Markov chain model and the Monte-Carlo method that we use to analyze and tune the game. Chapter 2 provides a thorough analysis of the game, including the probability of moving from state i to state j after n moves and the expected square and cost after a certain number of moves. Specifically, we discuss the mathematical solution when the matrix is singular and when it is non-singular.
We also use the fundamental matrix to calculate the expected number of visits to each state and the probability matrix of landing in the absorbing states. We explored the possibility of adding a black hole and found that a black hole has a huge impact on the game length when placed on a small board such as the 3x3 board. Since the results show that 89% of players would end up stuck in the black hole, we decided to move the black hole to a bigger board and to place it on a square that receives fewer visits than the others. In order to analyze the number of visits from state i to state j, and thus calculate the average costs on the edges, we introduced a method that transforms the 9-state Markov chain into a new 23-state Markov chain; in this way we reduced the problem to the earlier one in which the cost is on the state. We also explored the impact of portals: a 3-length backward portal prolongs the game from an average of 14 moves to 56, showing a huge influence on the game.

In Chapter 3 we explored the promising idea of applying the methods of Chapter 2 to an extended 6x6 game board. We analyzed the original board and built the game step by step, modifying one parameter at a time after analyzing the results of the previous change. The average number of moves when the game is finished, the number of visits to each square, and the total average cost are the indices we focused on. After several iterations, including changing the cost type, adding portals, adjusting portal lengths, placing the black hole, and re-placing the black hole, we obtained our final game prototype.

This thesis has concentrated on applying Markov chains and the Monte-Carlo method to the design of a simple chance-based board game. In reality, however, casino games and board games can be much more complicated: casino games have more possible outcomes and include betting strategies, and board games such as Monopoly usually involve multiple players and player decisions.
The Martingale model of betting strategies [8] is an area I hope to spend more time on in my future research; the rules for losing and gaining money could be based on the player's bets instead of fixed costs based on states. Another research direction is game theory [10], specifically the Hidden Markov Model [11], in which the states are only partially observable. Exploring the use of hidden Markov models in a strategic game [12] is another possible direction for my future work.

Bibliography

[1] Nick Berry (2011): Analysis of Chutes and Ladders, http://datagenetics.com/blog/november12011/index.html
[2] S. C. Althoen, L. King, K. Schilling (1993): The Mathematical Gazette, Vol. 77, No. 478, pp. 71-76, http://www.jstor.org/stable/3619261
[3] Metropolis, N.; Ulam, S. (1949): The Monte Carlo Method, Journal of the American Statistical Association 44 (247): 335-341. doi:10.2307/2280232. JSTOR 2280232. PMID 18139350.
[4] Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward (1953): Equation of State Calculations by Fast Computing Machines, The Journal of Chemical Physics 21 (6): 1087-1092. Bibcode:1953JChPh..21.1087M. doi:10.1063/1.1699114. ISSN 0021-9606.
[5] Monte Carlo Simulation, Palisade.com, http://www.palisade.com/risk/
[6] Charles M. Grinstead, J. Laurie Snell (1997): Introduction to Probability, American Mathematical Society, pp. 405-425. ISBN 978-0821807491.
[7] Geoffrey R. Grimmett, David R. Stirzaker (2001): Probability and Random Processes, Oxford University Press. ISBN 0198572239.
[8] David Williams (1991): Probability with Martingales, Cambridge University Press, Cambridge.
[9] Wikipedia contributors: Monte Carlo method.
[10] Robert Gibbons (1992): A Primer in Game Theory, Harvester Wheatsheaf.
[11] E. Waghabir (2009): Applying HMM in Mixed Strategy Game, Master's thesis, COPPE-Sistemas.
[12] Baris Tan (1997): Markov Chains and the RISK Board Game, Mathematics Magazine 70, pp. 349-357.