

Tim Burke

Intention as Beaver Dam or Blindspot

So: thinking about this sketch of things, I want to move a bit beyond the tweaking and poking and re-labelling that it often provokes when Paul lays it out (though that is still a very useful activity).

Here's the interesting problem about the main class of story-tellers that we know about, namely, ourselves. Near the end, I was thinking that it may be that our minds, a product of evolution, have difficulty comprehending emergent and/or evolving processes in conscious or self-reflective terms, which is where our sense of experiencing agency or intention resides.

These are issues Paul has raised before with this group: human minds may govern human action in ways that are not conscious, and consciousness may be mostly a post-hoc or emergent sensation.

But if we have agency or intention, we tend to think that's a product of consciousness or sentience. Our non-conscious minds may be perfectly able to operate effectively within a world full of emergent or complex systems; our conscious minds contemplating intentional action may not be very good at doing so. When we contemplate acting intentionally, we tend to construct the relationship between our planned actions and intended outcomes in a fairly linear fashion.

Social science, environmental science, and the like mostly rely upon this: isolating a single variable or reducing a problem initially for the sake of comprehending a whole, then often forgetting that such a reduction or modelling has been performed while advocating an action intended to produce a new systemic outcome based on an alteration of that single variable. What usually happens instead is something unexpected or unpredicted.

So here's a real problem that follows from this. Using Paul's terms, the intentionality of story-tellers unmistakably produces contingent outcomes that affect both model-builders and the active inanimate. The feedback arrows he builds into his model seem absolutely right to me, and they are meaningfully contingent rather than deterministic (however much they may be the outcomes of a process which is deterministic at larger scales).

However, we could argue that the outcomes produced by intentional action are never quite what the conscious intent of story-tellers was aiming for, because our minds don't really understand how complex causality works. A purely literary example: this is roughly what some varieties of postmodern theory argue about the relationship between the author and the audience of a text--that an author may intend, in creating a text, to have a particular effect on an audience, but the actual consequences of the text being produced, circulating, and being read in the world across time and space produce many outcomes that the author did not and could not anticipate or intend.

We could go on and on with examples of the declared intentions of agents that end up mismatched to their apparent systemic consequences, before we even open the Pandora's box of whether intentions are just post-hoc stories we tell about non-conscious actions. Maybe the kind of agents that we are, with the brains that we have, makes us incapable of ever really matching deliberate action with intended outcomes, and maybe this is a structural blind spot in human cognition. (Makes you wonder about the arrow leading onward from story-tellers in Paul's diagram: is there a kind of information-storing or information-manipulating entity which someday could plausibly have agency that is compatible with and cognizant of emergent/evolving systems?)


Or. Here's the other possibility. Is the intentionality of story-tellers more like a beaver's teeth? Meaning, do story-tellers progressively alter their environments so that those environments conform more and more to how story-tellers relate action and consequence? A beaver moves objects around in the environment in such a way that the environment changes to favor the beaver's reproduction, using a physiology which is adapted to making those changes. Maybe humans are engaged in a cumulatively more successful effort to remake the evolving systems which produced the active inanimate and then model-builders, so that those systems are really or ontologically more and more responsive to linear understandings of agency. This is pretty much what high modernist science, policy-making and art imagined it was accomplishing up through the 1960s, though the language used to describe this goal was more about mastery, control, etc. over the universe. If you take this view, unexpected or unintended outcomes following on intended action are a sign of incomplete work rather than of a mismatch between story-teller intentionality and the "real" world of emergence and complexity. They're the leaks in a beaver's dam, yet to be plugged.

In this view, the universe is being steadily cut to fit the bed of the evolved mental preferences of story-tellers, to be an entity about which stories can be more readily and effectively told (though so far on an intensely local scale).

There are all sorts of possible meeting grounds between these two paradigms, of course. You could look at the second idea as a story that story-tellers like to tell to themselves, but a story which nevertheless has pretty powerful real influences on their actions in the world. Or you could argue that there are some kinds of actions where intentions and consequences can be forced to align through simplification or optimization of a system or structure and others where complexity is inevitable.


