BIOLOGY 103

This lab is intended to give you some experience with aspects of evolution. It consists of three simulations, all of which you should explore at least briefly. Choose one for more extensive exploration and post a lab report on it.

The Game of Life, at http://serendipstudio.org/complexity/life.html

- How many different stable life forms can evolve from random patterns?
- How sensitive is evolution to the particular random starting pattern?
- By making sufficient observations, can you work out a set of rules that allow you to predict what life forms will evolve from a given starting pattern?

The Prisoner's Dilemma, at http://serendipstudio.org/playground/pd.html

(Pseudo)-Altruism, at http://www.brynmawr.edu/Acads/Biology/Bio101/prot/pseudoaltruism.html - thanks to Ted Wong - REQUIRES Internet Explorer instead of Netscape

Some additional models to play with (also require Internet Explorer):

- Altruism, at http://ccl.northwestern.edu/netlogo/models/Altruism
- Cooperation, at http://ccl.northwestern.edu/netlogo/models/Cooperation

Maintaining a proportionate number of red (altruistic) and blue (selfish) ants is almost impossible. The blue ants prey on the red ants' capacity to share: even starting with one blue ant, with trust, altruism probability, and fidelity all at their preset levels, the blue ants' rate of growth is impressive. It was difficult to keep one population living harmoniously with the other. The only settings I found under which both populations remained stable for an extended period were an initial anti-trust of 47%, an altruism probability of .1, a fidelity of .4, and a single starting blue ant. Over that period the blue ant population remained stable; it did not overwhelm the red population suddenly, but did so gradually.

It is important to note that we did have populations in which the red ants dominated, but only under circumstances highly adverse to the blue ants (extremely low trust and low altruism probability).

Brenda Zera and Elizabeth Damore

First, we ran a control run from which to base our hypothesis about ant survival.

Cooperative ants (red): 50

Selfish ants (blue): 1

Probability of altruism: 1

Fidelity: 0

The two populations crossed at 50 time units. The red ant population crashed, while the blue went up to 967 members and reached an equilibrium.

From this we hypothesized that if we increased the initial number of red ants, it would take longer for the blue ant population to overtake them.

Red ants: 100 individuals

Blue ants: 1

Probability of altruism: 1

Fidelity: 0

The results were the same as in the initial experiment, but this time the maximum ant populations were 1019 (first the red, then the blue). Our hypothesis was proven incorrect!

We then experimented with the other controls to see what the effect would be on the population (we wanted to try to keep the red ants alive).

All experiments were conducted with a 100 red: 1 blue ant ratio

First, we reduced the probability of altruism from 1 to 0.

The red ants grew to a population of 988. The blue ants had a small population at first, but died out completely by 300 time units.

Next, we changed the probability from 1 to .5.

The red increased greatly to 1029, then began to decrease rapidly. The blue ants increased and crossed the red ant population at 100 time units. The red ants flat-lined at 130 time units. The blue stabilized at 1029 individuals.

Then we changed the fidelity from 0 to 1 and put the probability back to 1 as well.

The red ants increased to 1018 and stayed there. The blue rose slowly but didn't get very high; they decreased and eventually died out at 170 time units.

We then changed the fidelity to .5 and kept the probability at 1.

The red went up to 955 individuals, then started to decrease (blue rising) at 100 time units. The two crossed at 160 time units. The red ants were gone by 254 time units, and the blue ants stabilized at 955 individuals.

Next, we tried the fidelity at 1 and the probability at 0.

The red ants increased to 962. The blue ants stayed low and died out before 100 time units.

This time, we put the fidelity at 0 and the probability at .1.

The red increased to 984; around 250 time units they dropped and the blue began to rise. The reds decreased some more at 319 time units, and the blues were rising fast by 400. The blue and red ant populations crossed at 502 time units and coexisted for a short while before the red dropped and the blue continued to rise (slowly).

For the last test, we put both the probability and fidelity at .5.

The red ant population increased to 987 ants and stayed there until 160 time units, when it started to go down. The blue ants stayed low until 160, when they started to rise. The populations crossed at 210 time units; the red ants were all dead by 400 time units, and the blue had levelled off around 990.

When the altruism probability was lowered, the red ants were more likely to survive, as they were when the fidelity was increased (higher fidelity allows altruistic ants to gain energy from each other).

GO ANTS!

We chose to focus on the "Prisoner's Dilemma," which involved playing against Serendip. We found that the best strategy for getting more coins while keeping a good average was to cooperate until the end and cheat at the last minute. That way, Serendip could not predict our cheating, which would otherwise leave us both with only 1 coin at a time. The only problem is that the number of rounds is not constant, which means you have to take a risk if you really want to profit.

Sarah Tan

Yarimee Gutierrez

M.R.

We played with the second game, with the gold coins and the pirates.

We surveyed the different types of strategy involved in the game. Below is an outline:

A. You can cheat the whole time.

You will win only by one coin

B. You can cooperate the whole time.

You will tie

C. You can alternate clicking cooperate and cheating

You will tie

D. You can alternate cheating and cooperating

You will tie

E. You can cooperate the whole time, then anticipate when the "Wizard" will end the game and then jump ahead and cheat just before the game ends. This is risky, and you still only win by five points.

F. You can choose to cheat first since you will automatically benefit by either one coin or five.

After cheating the first time, you can either choose to tie (by going back to cooperating) or gain only one coin per round for the rest of the game (by cheating). If you cheat for the rest of the game, the computer dubs you as "Flirting With An Inconceivably Foul Fate" and therefore not a very nice person. However, you never get ahead by being nice. (You never fall behind, either.)

Ultimately, you can only win by 5 points, and the computer will tell you that you can do better. But you really can't.

In each trial of the game, the computer starts with cooperation and mirrors your move every turn thereafter. Consequently, we, the autonomous party, will always WIN! We ran two trials each of A) all turns the same, B) turns alternating every other turn, and C) turns alternating every two turns; in each pair, one trial started with cooperation and one with competition. We recorded the averages instead of the ending number of coins because we felt this most accurately represented the ongoing interactions between systems or individuals in life (which is also ongoing). The point, presumably, of this exercise was to find the most lucrative pattern and method of survival for both parties. While the trials show that the strategy of full cooperation was most beneficial for both, we are hesitant to think this finding applies in a larger world context. The game does not account for any randomness on the part of the computer; if that autonomy were added to the simulation, the results would presumably be different.

Stephanie Lane, Kate Amlin, Katie Campbell

For this lab we came up with the following hypothesis:

(As postulated with The Prisoner's Dilemma:) "The best strategy for a given player is often one that increases the payoff to one's partner as well."

First, we explored The Prisoner's Dilemma by playing the game with four different strategies:

Strategy #1: We competed each time.

The outcome: We had 14 coins, Serendip had 9.

Strategy #2: We cooperated each time.

The outcome: We had 36 coins, Serendip had 36.

Strategy #3: We cheated the first time and then cooperated for the remaining times.

The outcome: We had the same number of coins that Serendip did.

Strategy #4: We cooperated until the last time when we cheated.

The outcome: We had 41 coins, Serendip had 36.

This leads us to believe that pure cooperation is not MORE beneficial than our other strategies.

However, since pure cooperation, in this instance, is defined as increasing your opponent's payoff and not just your own, we began to question the nature of altruistic action.

This led us to the exploration of the (Pseudo) - Altruism Game.

We started out with equal numbers of selfish and altruistic ants, a .5 altruism probability, and a .5 fidelity probability.

The altruistic ants were extinct around 128 time-intervals.

When we kept the altruism and fidelity probabilities at .5, increased the altruistic population to 100, and decreased the selfish population to 10, the altruistic ants became a minority at 90 time-intervals and were extinct at around 319 time-intervals.

When we increased the fidelity level to 1, kept the probability level at .5, and set the selfish and altruistic ant levels at 50...

the altruistic ants maintained a population comparable to the selfish ants.

Therefore, we disproved our hypothesis. We conclude that it is impossible for life to exist, and therefore evolve, with purely altruistic actions.

The Prisoner's Dilemma showed us that altruistic actions can be beneficial.

However, our experiences with the (Pseudo) - Altruism experiment prove that pure altruism will inevitably lead to extinction. Since life cannot be sustained without some selfish acts, it is doubtful that any action in life is void of selfishness.

The Prisoner's Dilemma

If you alternate cheating and cooperating, you can earn a maximum of 5 pts. in two rounds, since Serendip does the opposite of your move.

Cooperate each time to earn a maximum of 6 pts. per 2 rounds, then cheat the last round to come out 5 pts. ahead in the end. If you both start to cheat, you drop to only one pt. per round.

This only works because you are able to predict the computer's moves. It is also necessary to know when the last round is, which is a matter of chance, because the game ends whenever the wizard is tired of playing.

To cheat or not to cheat, that is the question. Whether 'tis nobler to screw the computer over...

OK. In the Prisoner's Dilemma we must look at the probability of profit. Serendip will mirror your actions but always starts off cooperating. If you both cooperate, you each gain 3 coins. If you both cheat, you each gain 1 coin. If one of you cheats and the other does not, the cheater gets 5 coins. Simple, right? Here are the possibilities (excluding random cooperation/competition):

1. It seems as if cheating is the most profitable way: if you cheat, you gain 5 coins, but Serendip will then mirror your actions, so you gain only 1 coin each round afterward unless you decide to cooperate while Serendip cheats and come out even in the end.

5 coins

6 coins

7 coins

2. Then maybe cooperation is best. If you do this, you will continue to get 3 coins each round and stay tied throughout the entire game. Your overall coin total will be higher than in the previous example, but is it the highest?

3 coins

6 coins

9 coins

3. Wouldn't it be better to cooperate up until the last moment, and then cheat Serendip? This way you will have gotten many more coins than if you had cheated throughout, plus an extra 2 coins at the end when you cheat.

3 coins

6 coins

11 coins

Which is the best strategy? Is there a best strategy? In the game, it seems you should cooperate until just before the end. The problem is that we do not know when the end of the game is; we are not told until it is reached. Thus, this strategy is not as good as it seemed initially. In real life it might be the most profitable overall, but we cannot say that it is the best.
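The three running coin totals above can be checked with a short script. This is our own sketch, assuming Serendip plays the mirroring strategy described (cooperate first, then copy our previous move):

```python
def score_vs_serendip(our_moves):
    """Running coin totals against a mirroring opponent.

    `our_moves` is a list of 'C' (cooperate) or 'D' (cheat).
    Payoffs from the game: both cooperate -> 3; both cheat -> 1;
    lone cheater -> 5, the cooperator gets 0."""
    payoff = {('C', 'C'): 3, ('D', 'C'): 5, ('C', 'D'): 0, ('D', 'D'): 1}
    serendip = 'C'  # Serendip opens by cooperating
    total, totals = 0, []
    for move in our_moves:
        total += payoff[(move, serendip)]
        totals.append(total)
        serendip = move  # Serendip mirrors our move next round
    return totals

# Always cheat -> [5, 6, 7]; always cooperate -> [3, 6, 9];
# cooperate then cheat on the last round -> [3, 6, 11].
```

Running the three strategies reproduces the coin sequences listed above, confirming that the late cheat is the most profitable over three rounds.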

We decided to explore The Game of Life.

No matter how much life you start with, whether a small or large amount, the area becomes evenly populated, eventually kills itself off through overcrowding, and finally separates into stable colonies of either three in a line or four in a square or diamond formation. For example, if every spot is populated, every spot dies out in only one time step. And if the player begins with isolated squares of four, no changes occur through the time steps, because those colonies are already stable.
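Both observations follow from Conway's rules: a live cell survives with two or three live neighbors, and a dead cell becomes live with exactly three. Here is a minimal sketch of one step of the simulation (our own code, assuming a wrapping toroidal grid, which the applet may or may not use):

```python
from collections import Counter

def step(live, w, h):
    """One Game of Life step on a w x h toroidal grid.

    `live` is a set of (x, y) coordinates of live cells.
    A live cell survives with 2 or 3 live neighbors;
    a dead cell is born with exactly 3."""
    # Count, for every cell, how many live neighbors it has.
    counts = Counter(
        ((x + dx) % w, (y + dy) % h)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A fully populated wrapping grid dies of overcrowding in one step
# (every cell has 8 neighbors), while an isolated 2x2 block is stable
# (each of its cells has exactly 3 neighbors).
full = {(x, y) for x in range(5) for y in range(5)}
block = {(0, 0), (0, 1), (1, 0), (1, 1)}
```

`step(full, 5, 5)` returns the empty set, and `step(block, 6, 6)` returns the block unchanged, matching the die-out and stable-square behavior described above.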

There needs to be a certain amount of red squares to supply the green squares with what they need to sustain life. When the green squares compete, only the fittest survive. The problem is that there are a limited number of resources (red squares) to go around no matter how many green squares there are. This gives the smaller, more isolated "colonies" a better shot at survival and sustainability.

The best conditions for life exist in the creation of small, isolated colonies in which there is minimal competition for the limited resources. In fact, this formation, which reconciles the competition versus cooperation issue, is the only viable option for continued sustainability.

When planning your ant farm, if you want nice ants, you should take the following data into consideration. (Remember: red ants are altruistic, blue ants are selfish.)

If you begin with 50/50 selfish/altruistic

Altruism Probability (AP) = 1

Fidelity (F) =1

Results: Red took over after 400 units of time.

We repeated this test, and the blue held out for 500 units of time until they all died.

Next...

Population: 50/50

AP=1

F=.5

Results: Blue took over in 75 units of time.

Population: 50/50

AP=.5

F=1

Results: Equal for a long time, then blue died off slowly but surely.

Population: 50 selfish/25 altruistic

AP=.5

F=1

Results: Both were steady, with blue higher than red until becoming even, finally red took over and blue died out.

Population: 50/50

AP=0

F=1

Results: Both red and blue rose quickly, but red died out slowly once carrying capacity was reached.

Our first test was successful (the selfish ants died) because all of the reds were cooperating with each other, while the blues could not depend on each other or anyone else. The reds were able to support each other and thrive!

In our second test, the red ants gained fitness 3/4 of the time and the blue ants only 1/4 of the time, but the blue ants had an advantage: they never had to give anything up, whereas the red ants were supporting both blue and red ants.

Our third test was the same as the first experiment, but it took longer: altruism did not occur as frequently, so the red ants were not cooperating, and thus not benefiting, as often.

In our fourth test, we started with fewer reds, so the process was much longer. Each state was prolonged because there were fewer ants to begin with.

Our final test was confusing. Because the probability of altruism was zero, no one was giving up or sharing anything, so there should have been no advantage and no disadvantage; we expected red and blue to reach steady levels.

Our conclusion is that the only time the red ants will win is when Fidelity is 1. The altruism probability just affects how long it takes, and time is also affected by the population size.
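This conclusion can be rationalized with a toy expected-fitness calculation. The model below is our own simplification, not the simulation's actual code: each altruist donates with probability p, paying a cost c; with probability f (fidelity) the benefit b goes to another altruist, otherwise a selfish ant collects it for free.

```python
def expected_fitness(p, f, b=1.0, c=1.0, base=10.0):
    """Toy per-step expected fitness (an assumption, not the real model).

    p: altruism probability, f: fidelity,
    b: benefit of a donation, c: its cost to the donor."""
    # An altruist pays c when donating and receives b when fidelity
    # routes a donation back to the altruist population.
    altruist = base + p * (f * b - c)
    # A selfish ant collects the misdirected share (1 - f) for free.
    selfish = base + p * (1 - f) * b
    return altruist, selfish
```

Under this sketch, altruists keep up with selfish ants only when f*b - c >= (1 - f)*b, i.e. f >= (b + c) / (2b); with equal benefit and cost that threshold is exactly f = 1, which agrees with our observation that the red ants win only at full fidelity.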

CR

Catherine Rhy

Trials and Tribulations of Altruistic Ants

Trial 1:

Initial-selfish – 1

Initial-altruist – 50

Altruism-probability – 1

Fidelity – 0

Outcome: All blue (Time: 126)

Trial 2:

Initial-selfish – 1

Initial-altruist – 50

Altruism-probability – 1

Fidelity – 0

Outcome: All blue (Time: 202)

Trial 3:

Initial-selfish – 1

Initial-altruist – 50

Altruism-probability – 0

Fidelity – 0

Outcome: Some blue, some red

Trial 4:

Initial-selfish – 1

Initial-altruist – 100

Altruism-probability – 1

Fidelity – 0

Outcome: All blue (Time: 100)

Trial 5:

Initial-selfish – 1

Initial-altruist – 50

Altruism-probability – 1

Fidelity – 1

Outcome: All red (Time: 158)

Trial 6:

Initial-selfish – 1

Initial-altruist – 50

Altruism-probability – 1

Fidelity – .5

Outcome: All blue (Time: 200)

Trial 7:

Initial-selfish – 1

Initial-altruist – 50

Altruism-probability – 1

Fidelity – 0

SPATIAL STRUCTURE ON

Outcome: All blue (Time: 100)

All of this data suggests that pseudo-altruism (that is, "altruism" with 100% fidelity) may be the most conducive to domination by the pseudo-altruistic population.

Laura Bang and Adrienne Wardy

We looked at the Prisoner's Dilemma game. Serendip uses a very simple (and predictable) strategy called "Tit for Tat": Serendip's default first move is to cooperate, and for every move after that it copies its opponent's previous move. This strategy is very easy to beat: all you have to do is start out cooperating, then at some point cheat, and keep on cheating from that point on. This leads to a win of 5 points over Serendip. The difficult part is anticipating when the "wizard" will decide the game is over, because you have to make your cheating move before that happens, or you and Serendip will tie. The longer you keep cooperating, the more coins you get overall, but no matter what, you can only win by 5.
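The "you can only win by 5" claim can be checked directly. The sketch below is our own, using the coin payoffs stated elsewhere in this forum (3/3 for mutual cooperation, 1/1 for mutual cheating, 5/0 for a lone cheater): against Tit for Tat, cooperating and then cheating from any round onward gives a margin of exactly 5, no matter when you switch.

```python
def margin_vs_tit_for_tat(defect_from, rounds):
    """Our score minus Serendip's when we cooperate until round
    `defect_from`, then cheat for the rest of the game."""
    payoff = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
              ('D', 'C'): (5, 0), ('C', 'D'): (0, 5)}
    us = them = 0
    serendip = 'C'  # Tit for Tat opens with cooperation
    for r in range(1, rounds + 1):
        move = 'D' if r >= defect_from else 'C'
        a, b = payoff[(move, serendip)]
        us += a
        them += b
        serendip = move  # Serendip copies our move next round
    return us - them
```

For a 10-round game, `margin_vs_tit_for_tat(k, 10)` is 5 for every k from 1 to 10: only the single round where we cheat a cooperating Serendip moves the margin, and it moves it by exactly 5.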

The strategy of "Tit for Tat" is encouraging toward cooperation: its first move is to cooperate, then if you cheat it will cheat too, but once you cooperate again "Tit for Tat" forgives you and cooperates again. However, in order to win, you have to cheat.

This applies to evolution because if we cooperate within and between species, we get more as a whole; but for the most part we act in our own self-interest (trying to win, even though it is only a small win), which leads to survival of the fittest.

Mer

Chels

We decided to play Pirate's Dilemma. First, we tried cooperating each round until the game was over. This resulted in a goodly amount of booty for both players. Then, we decided that cheating is cool, so we cheated until the game was over. Trusting Serendip cooperated the first round, giving us 5 more coins - ha! Serendip, unfortunately, learns quickly and cheated as well for the rest of the game. We then tried cheating and then cooperating: initially the computer cooperated and we cheated, then it cheated and we cooperated, and from then on we both cooperated. We've determined that the computer bases its decisions on what we did in the previous round. Finally, we determined the best way to beat Serendip AND keep a decent average is to cooperate for approx. 10 rounds, then stab them in the back - bloody landlubbers!!

We then focused on how many rounds we could play while all cheating or all cooperating. We found a negligible difference: games where we only cooperated ran 1-2 rounds longer. The average number of rounds when only cheating was 10.5; when only cooperating, approx. 13 (one time we got 23, go us!).

Given these results, we determined that stabbing the computer in the back after the tenth round was most profitable. We also noted that if we were wrong and the game didn't end, we should continue cheating because, once betrayed, serendip doesn't trust us anymore.

In conclusion, although continually cooperating yields a better average and more coins overall, to beat Serendip you must be a blue-blooded, bearded bastard and cheat at the last minute - harharhar.


© by Serendip 1994- - Last Modified: Wednesday, 02-May-2018 11:57:53 CDT