
Cause and Affect: Intentionality as First Mover?

Doug Blank

I hope in this discussion to explore the notion of "intentionality" and how we, as complex, evolving systems, can make sense of it.

Vignette #0: What is intentionality?

  1. What is intentionality? (You can substitute "agency", "storytelling", "counterfactualizing", etc. if you wish)
  2. How can you test if you have it?
  3. What causes it? What can it cause?
  4. What things have it?
  5. Can it be rational?
  6. Is intentionality consistent? Random?
  7. Can intentional choices be ethical/moral?
  8. Can the ends justify the means of an intentional choice?
  9. Can an intention be a "first mover" (instigate action)?
  10. Is this a new thing in the universe?

Vignette #1: Tool-Making Bird


Shaping of Hooks in New Caledonian Crows
A. A. S. Weir, J. Chappell, A. Kacelnik
Science 297, 981 (2002)

  1. Is this an intentional act?
  2. Is the bird an intentional agent?
  3. What caused the bird to act this way?
  4. What was going through the bird's mind?

Vignette #2: It's a Dog Save Dog World

http://www.youtube.com/watch?v=ofpYRITtLSg

  1. Was this an intentional act?
  2. Is the dog an intentional agent?
  3. What caused the dog to act this way?
  4. What was going through the dog's mind?

Vignette #3: Testing a Robot's Memory and Lack Thereof

In order to test a memory model we were developing, we decided to show it in the best possible light: compare it against a model that had no memory, in an environment that required memory. But things went astray.

  1. Turn on a light on the left or right side of a hallway
  2. Turn the light off
  3. Have the robot travel down the long hallway
  4. Have it recall which side of the hallway the light was on
  5. Have it turn in that direction at the end of the hallway

This requires memory by definition, but the robot without memory evolved to solve the task! (Reveal the unexpected evolutionary "trick" here.)
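To make the setup concrete, here is a minimal sketch of the task as an evaluation loop, in Python. The Robot class and its controller interface are hypothetical stand-ins for illustration, not the actual simulator from the experiment. A controller with no memory sees only the current sensor reading, so by the task's definition it should do no better than chance.

    import random

    class Robot:
        """Toy stand-in for the simulated robot being evaluated."""
        def __init__(self, controller):
            # controller: a function from the current sensor reading to an action.
            # A memoryless controller can use nothing but this single reading.
            self.controller = controller

        def act(self, reading):
            return self.controller(reading)

    def evaluate(robot, trials=1000):
        """Fraction of trials in which the robot turns toward the remembered light."""
        correct = 0
        for _ in range(trials):
            light_side = random.choice(["left", "right"])  # 1. light on one side
            robot.act(light_side)                          #    robot sees it once
            for _ in range(10):                            # 2-3. light off; travel
                robot.act(None)                            #      down the hallway
            turn = robot.act(None)                         # 4-5. turn at the end
            if turn == light_side:
                correct += 1
        return correct / trials

    # A memoryless controller reacts the same way every time the light is off:
    print(evaluate(Robot(lambda reading: "left")))         # ~0.5, i.e. chance level

The puzzle is that a controller with no internal state at all nonetheless evolved to beat this chance level on exactly this task.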

Vignette #4: Horizon Effect in Artificial Intelligence

"When evaluating a large game tree, search depth is often limited for feasibility reasons. Sometimes evaluating a partial tree may give a misleading result. When a significant change exists just over the 'horizon', slightly beyond the depth of search, one falls victim to the horizon effect."

"The horizon effect can be mitigated by extending the search algorithm with a quiescence search. This gives the search algorithm ability to look beyond its horizon for a certain class of moves of major importance to the game state, such as captures."

http://en.wikipedia.org/wiki/Horizon_effect

In reality, there is never a truly quiescent state; the system keeps changing indefinitely. Therefore, either a rational move must be made from a finite counterfactual tree, or we must admit that a rational move is impossible.
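As an illustration of the quiescence idea quoted above, here is a minimal sketch of depth-limited search with a quiescence extension, in Python. The game interface (moves, apply, is_noisy, evaluate) is hypothetical, and evaluate is assumed to score a position from the perspective of the player to move, as negamax requires.

    def search(game, state, depth):
        """Depth-limited negamax: at the horizon, hand off to quiescence search."""
        if depth <= 0:
            return quiescence(game, state)
        moves = game.moves(state)
        if not moves:                                   # terminal position
            return game.evaluate(state)
        return max(-search(game, game.apply(state, m), depth - 1)
                   for m in moves)

    def quiescence(game, state):
        """Keep searching 'noisy' moves (e.g. captures) until the position is quiet."""
        best = game.evaluate(state)                     # the score if we simply stop here
        for m in game.moves(state):
            if game.is_noisy(m):                        # a capture, a check, ...
                best = max(best, -quiescence(game, game.apply(state, m)))
        return best

Note that quiescence only pushes the horizon back for one privileged class of moves; the tree being searched is still finite, which is exactly the tension raised above.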

Vignette #5: A Flock of Birds

  1. Was this an intentional act?
  2. Is the flock an intentional agent?
  3. What caused the flock to act this way?
  4. What was going through the flock's mind?

Vignette #6: What's the best strategy in the Prisoner's Dilemma?

"Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies (defects from the other) for the prosecution against the other and the other remains silent (cooperates with the other), the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?"

http://en.wikipedia.org/wiki/Prisoner%27s_dilemma

Well, it depends. If you play just once, your strategy might differ from the one you would use if you had to play every day. Studying the Iterated Prisoner's Dilemma in The Evolution of Cooperation (1984), Axelrod found that the Tit-for-Tat strategy was a solid, generally robust winner.
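To make the iterated version concrete, here is a minimal sketch in Python, scoring each round with the jail terms from the quoted dilemma (years served, so lower is better). The strategies shown are illustrative, not Axelrod's actual tournament entries, and his tournament used a point-based payoff matrix rather than prison sentences.

    COOPERATE, DEFECT = "C", "D"

    # COST[(my_move, their_move)] = years I serve, taken from the quoted dilemma
    COST = {
        (COOPERATE, COOPERATE): 0.5,   # both stay silent: six months each
        (COOPERATE, DEFECT):    10.0,  # I stay silent, my partner betrays me
        (DEFECT,    COOPERATE): 0.0,   # I betray, my partner stays silent
        (DEFECT,    DEFECT):    5.0,   # we betray each other
    }

    def tit_for_tat(their_history):
        """Cooperate first, then copy whatever the partner did last round."""
        return their_history[-1] if their_history else COOPERATE

    def always_defect(their_history):
        return DEFECT

    def play(strategy_a, strategy_b, rounds=100):
        """Total years served by each player over repeated rounds (lower is better)."""
        moves_a, moves_b = [], []
        years_a = years_b = 0.0
        for _ in range(rounds):
            a = strategy_a(moves_b)    # each strategy sees only the other's past moves
            b = strategy_b(moves_a)
            years_a += COST[(a, b)]
            years_b += COST[(b, a)]
            moves_a.append(a)
            moves_b.append(b)
        return years_a, years_b

    print(play(tit_for_tat, always_defect))   # (505.0, 495.0): Tit-for-Tat concedes
                                              # only the first round, then both defect

Against another Tit-for-Tat player the same loop settles into mutual cooperation from the first round on, which is the kind of robustness Axelrod reported.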

Vignette #7: Moral Dilemmas

From http://www.wjh.harvard.edu/~jgreene/

First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."

These two cases create a puzzle for moral philosophers: What makes it okay to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's okay to turn the trolley but not okay to push the man off the footbridge?

From "From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology?"

Over the last four decades, it has become clear that natural selection can favour altruistic instincts under the right conditions, and many believe that this is how human altruism came to be. If that is right, then our altruistic instincts will reflect the environment in which they evolved rather than our present environment. With this in mind, consider that our ancestors did not evolve in an environment in which total strangers on opposite sides of the world could save each others’ lives by making relatively modest material sacrifices. Consider also that our ancestors did evolve in an environment in which individuals standing face-to-face could save each others’ lives, sometimes only through considerable personal sacrifice. Given all of this, it makes sense that we would have evolved altruistic instincts that direct us to help others in dire need, but mostly when the ones in need are presented in an ‘up-close-and-personal’ way.

These moral dilemmas are battles fought in our brains. The decision is made by the winning side.

Vignette #8: Limitations of Self-awareness

  1. Doctor sticks an electronic probe into a man's brain and activates a neuron
  2. Man begins whistling Dixie
  3. Doctor: "Why are you whistling?"
  4. Man: "Oh, I've had this song stuck in my head for weeks. I just can't get it out of my head."

Vignette #9: The Right Thing To Do

My first job was as the manager of a restaurant. I was 19 years old. The first time I "closed" (cleaned up, took inventory, balanced the books), we had a cheeseburger left over. It seemed a waste to throw it away, so I offered it to the person who had worked the grill and cleaned "the line", as a reward for a job well done. Two birds with one stone.

The next night I closed again, with the same people. That night, we had a double cheeseburger, a chocolate shake, and a brownie à la mode left over. Cause and effect.