
Topic 8: Thoughts on lateral inhibition, perception and other things...


Barth

The definition of "reality" as "that which accounts best for all known experiences" is one that, I think, overturns the classical definition of reality itself...that is, a reality existing independent of our perceptions. But as I don't believe in any sort of direct-access realism (and such a belief would be very difficult for anyone familiar with phenomena like phi motion and the blind spot) there's no way to draw a satisfactory conclusion that I can think of.

Why is it so common to think of perception as a passive process in the first place? Could this be an example of an overreaction to a change in scientific knowledge? Since it was once thought that light came from the eye, bouncing off objects and allowing us to see them, obviously this passive perception idea was not always with us. Maybe when this was discovered to be untrue, the accompanying attitude toward perception was that it was a passive process involving no distortion.

More than interesting conversation. I'm with you, I think. In the absence of any "direct access realism", the issue of whether there is a "reality" independent of our senses is an open one. Or, maybe more significantly, it's an experimental question in the sense we talked a bit about in class: is there the kind of coherence in the summaries made by individuals over time, and among the summaries made by different individuals, which would suggest that there are stable things outside the self which are giving rise to its experiences? In general, the answer would seem to be yes, but with two important provisos. The first is that the presumption of an external reality is a hypothesis, and, like any hypothesis, can be supported by additional observations but never proven. The other is the issue of how good our understanding of external reality is at any given time, and whether it "improves" over time. I think the answer to the second question is yes, in the sense that summaries are progressively of larger numbers of observations, but I'm not entirely sure (the collection of observations might be systematically distorted). Others have argued that there is some detectable convergence in the summaries over history, suggesting they are getting closer to something "real". I'm not entirely sure whether I'm comfortable with this argument (for the previous reason).

Why there is a presumption of passive perception is a very interesting question I hadn't thought about. Nor had I thought about your point that some earlier theories of vision had a much more active character, which helps to locate a transition in cultural time. In general, my sense is that presumptions of "passiveness" in thinking about the brain tend to correlate with the rise of "rationality" and that in turn with industrialization. The former may seem somewhat paradoxical, but relates to the wish to be "neutral" and hence "authoritative", the latter with the notion that humans are malleable (and hence transformable into cogs in machines). Anyhow, very much worth thinking more about. Thanks. PG


Biernat

Perception and Summation

The brain actively constructs one's perception of reality (i.e., the "picture in your head") by combining a variety of information obtained from several sources. While some sources of information may be used more frequently than others, no one is used to the exclusion of all others. This is reminiscent of the previously discussed summation process carried out by neurons, as well as, at another level, the use of motor symphonies in creating actions.

In neuronal summation, a neuron receives input from a large number of other neurons. That, in addition to information contained within the neuron itself, determines whether or not the neuron will fire. The input from any single neuron is negligible; only the summation of all the inputs determines whether the target neuron fires. Of course, some neurons may have more impact on a target neuron's firing "decision" than others (such as a neuron with a particularly high firing rate), but it is ultimately the combined effect of all the contributing neurons that is important.
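
To make the summation idea concrete, here is a minimal Python sketch of the kind of "decision" described above. It is only an illustration, not a biophysical model; the weights, bias, and threshold are made-up numbers.

```python
# Illustrative sketch of neuronal summation: a neuron fires only when the
# weighted sum of many inputs, plus its own intrinsic state, crosses a
# threshold. No single input decides the outcome on its own.

def neuron_fires(inputs, weights, intrinsic_bias=0.0, threshold=1.0):
    """Return True if the summed, weighted input exceeds the firing threshold.

    inputs  -- activity levels of the contributing neurons
    weights -- synaptic strengths; positive = excitatory, negative = inhibitory
    """
    total = intrinsic_bias + sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

# Each contribution is small; only the overall sum matters.
inputs = [0.2, 0.3, 0.1, 0.5, 0.4]
weights = [0.5, 0.4, -0.3, 0.6, 0.2]   # one inhibitory connection
print(neuron_fires(inputs, weights))    # the "decision" reflects all inputs together
```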

Similarly, the motor symphonies involved in creating actions are a summing effect. Rather than having each action created by a single neuron, the action is produced by a pattern of activity across a series of neurons. Given the alarming rate at which neurons die, and the fact that they are never regenerated, this is a comforting fact. One would be in sad shape if one's walking neuron, or speaking neuron, were to be the latest of the brain's casualties.

The constant recurrence of this summation theme warrants further thought. The system obviously works well enough, but it is interesting to consider whether it would be even more effective with a "boss" to oversee all operations. After all, the lack of a general overseer with final say-so is generally counterintuitive--probably because such a structure does not seem to exist in United States society. It seems that all companies, while they may have a board of directors, have a chief executive officer. In the United States government, while citizens may be comforted by the fact that power is somewhat distributed through the existence of Congress, there is still a President who usually has the final say.

Upon further consideration, though, at least in the two examples given above, one can see that the summation effect is not as alien as first thought. One could see the House of Representatives as the equivalent of one neuron, the Senate as another, and the President as a third. All three contribute to decision making. Without the input from the House and Senate, the President would never have been "turned on" (no pun intended). That is, the President only makes major decisions on issues that have been brought to his attention by the other two bodies. At another level, the President could be regarded as the brain of the country, making decisions based on the input (popular opinion) he receives on the issues. (At least, this is ideally how it should work.)

Following the aforementioned observation that neurons are constantly dying and are not regenerated, it seems best that there be no one "boss" in perception. If perception were exclusively dictated by the left eye and that eye were lost for some reason, the result would be devastating. Therefore, while a summation system may at first seem alien, upon closer examination it is more common than it appears, and seems to be a very sensible mode of operation.

Interesting, appropriate sensory/motor connection, and extension to interacting humans. The latter may be even more apt than you describe: certainly the writers of the constitution did NOT have in mind a President as the BRAIN of the country; that's why they designed a system of checks and balances. Now what's worth thinking about is whether they had in mind a way to assure function despite occasional assassinations or whether perhaps instead there is some other reason for distributed control systems, both in democracies and elsewhere. PG


Bostick

I am once again intrigued. Although I'm not sure that I can come up with anything interesting, I definitely like this section on the retina. The fact that the brain takes peripheral information and fills in the hole in our vision intrigues me. I'm not sure if it's based on a similar process, but this reminds me a lot of decision making in the face of a new situation. The brain takes the information it receives from around the hole and uses it to make an educated guess. This is similar to being in a new situation, taking the given information in the form of action potentials and filling in the blank. Furthermore, if the brain is able to do this with one spot in the vision, is it possible for the brain to fill in small holes that occur in people who have partial blindness? So I guess I want to know whether this is a specialized system or something all the ganglion cells are capable of doing?

Depth perception was another topic that I found interesting this week. It is obvious to me that depth perception is a complex system which utilizes many processes. What I would like to know is how much is learned and how much is genetic? I know from life observations that some people have better abilities to judge depth than others (extremely obvious when skiing). Corollary discharge and the calculation of binocular vision would appear to me to be highly genetic. Painter's cues seem to me to be a completely experiential and therefore learned ability. By a certain age most humans realize that the same object will be smaller farther away and bigger up close, and that objects very close to a moving observer, like trees, will appear to move very fast as opposed to objects far away, like the moon. However, I do not think this is the only part of vision that is learned. It would seem to me that if Central Pattern Generators can be produced through experience and are not just inborn, this could include depth perception. Going back to skiing, where depth perception is vitally important and also quite difficult, I postulate that professional skiers train their eyes to determine depth with much higher accuracy than most people. By numerous repetitions they might create a CPG which requires less input, or fewer signals, before it generates the three-dimensional picture in the brain. Michelle K. Bostick

Very nice set of questions/curiosities. Indeed, the "filling in" is a generalizable concept; we'll see more of it in the visual system today, and I like your extension to problem solving in novel circumstances. And the genetic/experience issues are very real, nicely posed in recognition of different mechanisms. Interestingly, there is pretty strong experimental evidence for an experience component in binocular vision (as well as a genetic component), and my bet would be that many painter's cues are not as dependent on experience as one might think. There's an interesting old literature on people born blind who recover vision later in life, as well as a quite active modern literature on the development of depth perception in human infants. I would bet the painter's cues issue has been explored in the latter, but don't know that for sure. Let me know if you'd like to look into it and I can suggest some places to start. PG


Bourgeois

I have heard that science has become like a religion in our society. What the scientists proclaim, everyone unquestioningly believes. What I have recently been learning in class about the brain and how we interpret incoming sensory signals, however, has required more than just belief from me; it has taken an almost mystical treatment of the information to accept the "facts" as true.

Let me explain. We know very minute details about how an image focused on the retina is transmitted to the brain. For example, the ganglion cells in the retina have the unique characteristic that they detect edges of images, thanks to input from excited photoreceptors registering abrupt changes of light intensity across parts of an image. That in itself is awesome and hard to believe. My first gut reaction is to say, "yeah, right, as if it could be that black and white." Especially since when I look at things, I do not seem to have any conscious connection with my ganglion cells. Then, science has told us more general grand schemes of visual interpretation, like the four ways the brain has to judge the visual distance of an object (binocular parallax, lens accommodation, motion parallax, and painter's cues). So, we are told that information is being processed at two levels: at the very smallest cellular level, and then at a level of conglomeration which uses the action potentials produced at the cellular level to assign value to the information coming in.

Maybe my biggest problem is connecting these "facts" of what is happening within my nervous system, which I do not experience consciously (that is, I am not aware of any of these events in my nervous system), to my experience of seeing. MY experience of seeing is completely devoid of ganglion cells firing or of lens accommodation. It requires a leap of faith to believe that when I look at my surroundings, my experience is just a bunch of chemical reactions. In fact, I think that today my threshold for scientific fundamentals has been reached, and I will not believe. Today, I prefer to live in doubt, and in a self-imposed blissful ignorance of all that scientific gobbledygook.

Your "choice" (a subject we will get to). But, in the meanwhile, you've put your finger on a very interesting and significant issue. Yes, none of what we've talked about in relation to visual perception is experienced directly by you (or anyone else) as part of visual perception. Which is to say that it is all going on in some boxes other than the I-function; the I-function only gets the result. On the other hand, much of what we've talked about is indirectly experienced by the I-function, in the form of various visual anomalies and illusions which one might (or might not) choose to be curious about. Science as religion as TRUTH? Surely THAT's not the lesson you take from thinking about the brain. Living in doubt is much closer to the reality. PG


Chiu

According to the Freeman article in Scientific American, different, orderly forms of chaos exist in many human sensory systems. Since many sensory systems send their input to the entorhinal cortex, where the signals are combined, we have a basis for our focus. A statement made in class, to the effect that what the brain makes up is more of a reality than reality itself, touches upon this same phenomenon. In light of this, one begins to wonder: do the "realities" created by the brain have a bearing on the interpretations of inputs from the various other sensory systems? For instance, knowing that chaos is generated in the sensory area of smell, and since smell is closely linked to our sense of taste, what sort of effect, if any, does one have on the other? Also, since our concentration has been primarily on our sense of sight, how feasible would it be to correlate the effects between the various systems?

In regard to the anatomical arrangement of neurons, I was surprised to find that at the optic chiasm of humans, fifty percent of the neurons cross while the rest travel uncrossed back to the occipital lobe of the brain. Again, this leads to the question of why the brain is arranged in an opposite-hemisphere dominant system. The fact that the amount of crossing over depends upon the organism's field of view makes one wonder how much the organization of the brain depended upon the way certain animals saw. Can a general trend relating to this question be found among the various higher organisms? If so, does the same hold true for lower animals without highly developed neocortical structures?

Another point of interest which was brought up in the Freeman article was the supposition that the presence of chaos in the brain was what separated it from the various artificial intelligence machines currently being created. If the ability to function with chaos in the human brain were eliminated, would that be the end of all the fast, apparently random associations made in the human mind?

Several interesting issues, worth pursuing further. Clearly different sensory systems influence one another, but I'm not sure I understand why you pose the question specifically in terms of "chaos". On the other hand, there certainly is an interesting possibility that human creativity depends on chaos (or something even less orderly), and we'll talk more about that later in the course. On a more specific note, there are indeed some very interesting trends having to do with chiasmatic behavior in different organisms, and we'll talk a bit about this too. PG


Duffy


Fegutova

Last weekend, when I was getting ready to write my essay for biology, I realized that I did not have much to say about the topics we had discussed in previous classes. It was certainly very interesting for me to learn about the anatomy of the eye, but the material seemed quite self-explanatory. I thought I would wait until after the Tuesday class in the hope that some controversial issues would come up, or at least something that I could relate to my experience. And the class surpassed my expectations.

First I would like to write about lateral inhibition. It makes sense to me now that our brain does not get information about the distribution of light (or sound, or smell), but rather about contrast. Our everyday experience confirms this hypothesis. At the same time, at least in the case of sound and smell, we can choose whether we want to get the whole picture instead of a mere "reminder" of change. It is true that we do not hear all the sounds that we are physically capable of hearing. But isn't it only because we do not pay attention to them, because we choose not to hear them? Before the last class started I was trying to listen to all the sounds I could in the classroom. When I concentrated, I could hear a humming sound, different from when I do not pay attention to it. But I had to consciously concentrate to be able to do this. Then, I tried to "focus" on specific people's voices. I was surprised to find that I could recognize the voices of people who were sitting far away from me among the humming, which now stayed in the background. It is true that we only pay attention to contrasts under normal circumstances, but if we choose, we can get the whole picture.

Now about lateral inhibition in seeing. We have learned that the photoreceptors' discharge can be excitatory or inhibitory. This fact, which results in a translation of reality into terms that are more understandable to us, is called lateral inhibition. For example, we do not perceive the difference that proximity to light makes in the appearance of surfaces. If it were not for lateral inhibition, we would have difficulties recognizing objects as separate entities. This assertion seemed a bit controversial to me. Many times we see shaded surfaces and we can still conclude that we see one object. Even if our sight betrayed us we could still recognize objects by touch.

Further, I would like to argue against the connection to Plato made in class. Plato, as I understood him, certainly believed in ideal forms, but these forms were not based on the appearance of the object. His point was that the ideal forms can have representations of various appearances, but we still know that an object of any appearance corresponds to the ideal form of that object. Another fascinating question brought up during class was dark adaptation. I realized that I attempt to speed up dark adaptation by looking for edges. It seems like a good argument for lateral inhibition. I guess my problem is with the assertion that there is lateral inhibition in all senses. I cannot come up with a single example where we cannot choose to get the whole picture instead of contrasts (besides seeing).

Glad we found some controversy. And very interested in your point: the ability to see things differently at different times (and to some extent by choice, an issue we'll get to). Being able to "choose to get the whole picture" does not necessarily mean an absence of lateral inhibition. What could in principle be going on is a difference in how much and in what ways the brain "fills in" between the edges. At the same time, in order to make clear the controversial point (the attention to edges), I overstated the actual facts. Some less "edgy" information does indeed get through the retina (and other senses), so we're both right on that point. As for Plato, I agree he did indeed argue that there are ideal forms which we see in various representations. For him, the ideal forms existed outside the observer who was provided only with the imperfect and varying representations. My argument is that Plato was basically correct except that he mislocated the ideal forms. They instead exist in the brain (reflected partly in lateral inhibition networks), and are why we see discrete entities instead of the more imperfect and varying representations. That make sense? PG


Feinberg

Although you have previously stated that not everything in the body has an advantage to it, the phenomenon of lateral inhibition has developed many times, in many different parts of the body. Why? What is the advantage of it?

It is very doubtful that humans ever lived in a world where edges were not important, so the lateral inhibition network is very much needed, but why have the system of having the brain fill in areas between edges? It seems to me that the brain could be so much more exact if it wanted to or needed to. Because of this mechanism, we are unable to see gradual spectrums of brightness change; it appears as pretty much all or nothing. I would think that the brain would want all the information possible to make a decision, and not just limit itself to the edges. I am sure the brain could develop such a system if seeing more were more advantageous, so why hasn't it? It really makes little sense.

And the idea that what our brain sees as reality is more real than reality makes no sense. How are you defining reality in this context? And who is to determine what reality is, when we change it by observing anything?

I don't know. I guess I don't really have that much to say because I am sort of confused right now, talking about what is or is not reality. Sorry.

My fault, in part. Hope it's clearer after the last couple of lectures/conversations? "Filling in" by the brain makes it possible for you to see things as relatively stable objects, more independently of variations in the circumstances under which you see them. And, assuming there really ARE objects out there, what you see is, because of that, more real than the information you actually get at any given time. That any better? PG


Grant


Gureja


Ivashchenko

We have spent a lot of class time discussing the ambiguities in the processes of perception and their effect on our conception of reality, yet it appears that, notwithstanding the complexity of the ways in which we see, we have a very coherent and clear picture of everything we need. The reality we perceive (which can be interpreted differently according to the definitions given to it, but here I will talk about my own subjective reality – things are real to me as I see them) provides all the information we need to survive (hopefully). In order to construct any image, the brain requires a large amount of information about individual points in it, and then more about the spatial relations of these points, and even with all of it, the accuracy of the picture is questionable, as the process of organization is inherently very complex. Then also, the fact that information about many of the points in the picture under construction is often unavailable to the information processing centers implies further ambiguity. Though more of the processes by which the picture on the retina is perceived by us have now been discussed, the degree of difference between the original and what we see is still difficult to assess.

The observations made in class about the lateral inhibition phenomenon – that any given area of photoreceptors subjected to light is excitatory to the ganglia directly in front of them (ganglia are firing at some specific rate to begin with) and inhibitory to the ganglia on either side of the cells straight in front of the affected photoreceptors (lateral cells) – can be seen as adding to the ambiguity of the picture we get. Lateral inhibition implies that not all information about a given source of light is considered in the making of an image. Another interesting observation made in conjunction with lateral inhibition was that different light intensities, for example bright and dim areas in the picture, cause the same frequency of firing from the ganglion cells. This rate of firing changes only on the edge between the two areas – the ganglion cells are reporting to the brain the location on the retina where there is a change in light intensity. Why the nervous system uses edge detection is probably very difficult to explain, but there is one obvious reason – for clearly separating the object in the picture from the surroundings. Seeing a white object on a white wall, for example, might be hard if the eyes had no efficient mechanism for detecting the sharp and yet not very obvious dividing lines between the two. We can also speculate that without edge detectors, the amount of information which isn't really needed for the construction of the picture, but is still being sent to the higher processing centers, would be very large. Edge detectors could also be used as a sort of focusing device – allowing the clear resolution and separation of the information from various sources of light – points in the picture. Therefore, they have an important organizing function.

Nice summary of where we are, and some pretty sophisticated thoughts about several functions lateral inhibition might serve. It probably does all of them, and one additional thing we'll talk about in lecture today. Yes, there is complexity (and ambiguity) in seeing, and of course it works (or we wouldn't be alive) but there are some other things going on as well. PG


Lee

As I reviewed my notes, I realized that the phenomenon of lateral inhibition is a result of the interconnectedness of cells (small boxes), which is an essential property of the components of the nervous system. Simply, lateral inhibition is when the activity in one cell inhibits the activity in the neighboring cells. Furthermore, as discussed in class, lateral inhibition leads to edge detection or enhancement. In the graph exercise done in class, the activity (or firing frequency) fell, or sloped downward, where it was dark, but rose where it was light. So this indicates the presence of an edge. Basically, the contrast of the edge (between light and dark) was enhanced due to lateral inhibition.

An example of the above is when we look at a gray scale. Each strip is uniform in intensity, but we think we see otherwise. It seems as though the entire scale is not uniform. Instead, it seems like the image is composed of different intensities. It is the eyes' processing mechanism of lateral inhibition (leading to edge detection) which causes us to see the image in such a manner.
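
The gray-scale effect can be imitated with a few lines of code. The sketch below is a toy one-dimensional version of lateral inhibition (the inhibition strength and intensity values are arbitrary, and real retinal circuitry is far more elaborate): each cell's output is its own input minus a fraction of its neighbors' inputs, which leaves uniform regions flat but exaggerates the step at the boundary.

```python
# Toy 1D lateral inhibition: subtract a fraction of each neighbor's input.
# Uniform strips stay uniform, but the output dips just before an edge and
# overshoots just after it, so the boundary looks sharper than it really is.

def lateral_inhibition(intensities, inhibition=0.2):
    out = []
    n = len(intensities)
    for i, x in enumerate(intensities):
        left = intensities[i - 1] if i > 0 else x
        right = intensities[i + 1] if i < n - 1 else x
        out.append(x - inhibition * (left + right))
    return out

# A dim strip next to a bright strip, each internally uniform:
strips = [10] * 5 + [20] * 5
print(lateral_inhibition(strips))
# -> [6.0, 6.0, 6.0, 6.0, 4.0, 14.0, 12.0, 12.0, 12.0, 12.0]
```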

I would assume that the amount of inhibition can most likely vary. So then I would think that the larger the amount of inhibition, the more contrast there would appear to be for the edge (since this is an inhibitory system).

However, I was wondering if time plays a role in edge enhancement. For example, the longer we look at such a uniform image, as described above, would it look any different over time? Also, how does time delay, if it does, affect the process of lateral inhibition and edge enhancement?

Yes, on both counts. Lateral inhibition an example of interacting boxes. And DOES help to explain why some things LOOK different than they should given the light coming from them (will see more of that in class today). And still more, there are indeed time varying effects too (though won't talk much about them, could point you to some readings if you're interested). PG


Lew


Neimark


Newman

I was reconsidering some of my conclusions on perception due to our last class. The argument that the reality of what is in our head, what is made up, makes better sense than what is truly out there is a very interesting notion. Since what we are always seeing is a combination of the motor sensory input, picture on the retina, and the brain's interpretation, the earlier statement would seem very true; however, part of me has difficulty with assuming that everything the brain could be "seeing" is real.

We all dream, yet how can we say that dreams are real? We perceive them in our heads, but they are not substantial enough to touch. This idea is furthered by a statement brought up in class, that if someone said that he/she saw a UFO, it would be his/her reality. I think that this is an interesting notion because in this case there would/should not be a point source of light reflected onto the retina to make a picture, or even any motor sensory information. However, the brain has an image that it has created, telling this person that he/she has just viewed a UFO, and this image becomes the truth to this person. In his/her perception, UFOs exist and are a facet of reality. His/her brain has made up a reality, yet is that reality a better notion of what is actually in reality?

Such ideas lead us to a very interesting part of humanity. Where do we judge the cutoff for each person's individual reality? Where does psychosis come into play? While reading Bessie Head's _A_Question_of_Power_, it became very difficult to distinguish what was truly happening to the main character and what she was imagining; her perception of her life was truly marred, in that she saw everyone and everything as an attack, even when it was the opposite. This seems to happen a little too often in life, when someone's idea of reality is skewed from the rest of the world's. Still, is there anything wrong with having a different take on life, or only when it becomes harmful to that person or others? The fact that each brain is different, and, thus, how the brain would interpret what it is viewing, makes sense. However, when exactly do these different views cause problems?

Very interesting, appropriate set of considerations. I don't know the book, but it sounds interesting/relevant. We'll talk more about dreams (and psychosis), but they do indeed go together with the lessons from talking about visual perception. Can you imagine anything not only not wrong, but even RIGHT, about individuals seeing things differently? And maybe even with one individual seeing things differently at different times? If so, how do you suppose one might resolve the differences in a productive rather than destructive way? PG


Perkins

Why edge detectors? Well, to begin with, the detection of edges is one of the more important aspects of building a picture. Edges allow one to form objects out of a big mess of differing intensities. Edges are of practical use in that they indicate possible changes in the distance of objects in the visual field.

Why the particular setup of edge detectors? Our most rudimentary picture consists of a map of intensities determined by the rate of firing of ganglion cells. It would make sense that the brain would use the given information to construct contours and edges. A drastic change in the rate of firing of ganglion cells indicates a large change in the intensity of the image. This change is probably indicative of the fact that one region of a particular intensity is ending and another region of a different intensity is beginning - the transition being an edge. It would be relatively simple for a system to detect changes in firing rate, and thus finding edges is made quite easy. The photoreceptors are specialized to determine the amount of light (intensity) and send the information on to the middle nuclear cells, which code the information as excitatory or inhibitory to the ganglion cells. If the ganglion cell receives information of uniform brightness covering the entire receptive field, the cell will only fire at base rate because the excitatory and inhibitory parts of the receptive field will cancel each other out. If different brightness information goes to the excitatory and inhibitory parts of the receptive field, the cell's activation will not cancel and firing will be above baseline. This increase in firing will indicate an edge if it is enough above the baseline firing rate, thus indicating a sharp change in contrast.
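
Here is a rough sketch of the center-surround arrangement just described, using made-up numbers rather than measured physiology: the cell adds excitation from the center of its receptive field, subtracts inhibition from the surround, and therefore stays at its baseline rate under uniform light but rises above it when an edge splits the field.

```python
# Toy center-surround ganglion cell: excitation from the center minus
# inhibition from the surround, added to a spontaneous baseline rate.

BASELINE_RATE = 10.0   # hypothetical spontaneous firing rate

def ganglion_response(center, surround, center_gain=1.0, surround_gain=1.0):
    excitation = center_gain * sum(center) / len(center)
    inhibition = surround_gain * sum(surround) / len(surround)
    return BASELINE_RATE + (excitation - inhibition)

# Uniform brightness over the whole receptive field: the two parts cancel.
print(ganglion_response([5, 5, 5], [5, 5, 5, 5, 5, 5]))   # 10.0 (baseline)

# An edge across the field (bright center, dim surround): no cancellation.
print(ganglion_response([9, 9, 9], [2, 2, 2, 2, 2, 2]))   # 17.0 (above baseline)
```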

Perhaps the nervous system uses this system (or something like it) to detect edges because it is simply based on the brightness information gathered by the photoreceptors. It uses relatively "primitive" information and doesn't require an excess of work on the part of the visual system.

Pretty good description of what it does. A little less clear on its implications, and what it gets one. Why "simpler", "more important"? What inbuilt assumptions make edges of "practical use"? PG


Rayburn

There is a picture in our head that is created in large part by the nervous system. Since sensory input is often ambiguous, the brain interprets the incoming information. This explains why a single event could be interpreted differently depending on an individual's unique nervous system. How a piece of art or music is perceived can vary greatly from person to person.

Is the picture in our head confined to a specific experience, or does it represent our perception of reality? It also seems that the picture in our head is determined by things outside the nervous system. What about collective consciousness? How a person "sees" a specific event may depend on a specific set of beliefs he or she holds. A religious fundamentalist may see the murder of a physician who performs abortions as justified. Why are there such similarities among groups in the interpretation of events? The media constantly reported the polarization between whites and blacks on the Simpson verdict. It is difficult to reconcile how two individuals may have a completely different perception of an event while groups of individuals may have similar interpretations of a particular event.

Does sensory input sometimes play a more significant role than the nervous system in the creation of the picture in our head? A religious cult is an example of an outside influence controlling someone's interpretation of the world. What is the interplay between a David Koresh or Jim Jones and the nervous system in an individual's perception of reality?

Nice set of questions/extensions. Though, be careful. Do you really mean "picture in head is determined by things outside the nervous system"? Presumably the set of beliefs a person holds are somehow things inside the nervous system, no? And then the questions you're posing become, how come different sets of people have different brain organizations in common? And how does it come to be that some brain organizations (Koresh, Jones) are particularly effective at influencing other brain organizations? If you're comfortable thinking about it that way, I (or you) can suggest some likely answers. PG


Shively


Simpson

The brain is constantly bombarded with a plethora of visual signals. Some of this information is selected for closer scrutiny, while other information is ignored. Discerning and determining these "objects of interest" from a cluttered environment is both a particular and a peculiar process. It implies focusing on some things at the expense of overlooking other things.

People suffering from damage to the posterior parietal cortex on the right side of the brain tend to ignore objects in the left half of their visual field. These people can see the objects but are unable to recognize the objects. Some experiments suggest that cells in the posterior parietal cortex discharge just before eye movements. The retina is capable of "describing" the visual world to the brain, but it is only with the movement of the monkey's eye towards an object that a more enhanced picture of the object can be registered. With an enhanced response, more nerve cells will focus on the particular object. This change, though, appears to happen even if the selected object is "no more striking than the rest of the visual field."

I am fascinated by these findings and by the puzzle they leave us. Who or what "directs" the eye? It appears that in order for an object to acquire meaning and to stand out the individual must focus his eye on the object, but is it not also true that it is not until the nerve cells have increased their activity and response that the individual is able to discern the object as a separate and important entity? How exactly does the brain interpret visual cues and how does this interpretation elicit a personal response or action? If an enhanced response in the brain is activated even if the object "is no more striking than the rest of the visual field", is our nervous system not acting for and in the place of the individual? Does this then support the notion that the brain and the nervous system provide a more accurate portrayal of reality?

Very interesting set of questions, which I wish we were going to have more time to talk about in the course (there is a large and expanding literature on "attention", which you've found a piece of). At the most basic level, "is our nervous system not acting for and in place of the individual"? Not "for", but certainly "in place of" assuming our brain=behavior equation is correct. And that then implies that to account for enhanced brain activity does indeed require an understanding of not only what is (or is not) "unusual" about parts of a retinal image but also what particular brains are "interested in" and what that means in neural terms. I'm not sure that connects to a "more accurate portrayal of reality" but it certainly does to a more personal one. PG


Timberlake

Now that we have acquired a better understanding of the nature of the picture in our heads versus the reality of the world around us, we are charged to consider what this tells us about our perceptions and how they come to be in our brains. We now know that our perception of a visual picture forms in our brains as a collection of borders, or changes in contrast, with areas of constant light intensity in between. The ganglion cells in our retinas effectively ignore a tremendous amount of information from the photoreceptors, forwarding to the brain only information about changes in intensity rather than information about the large areas of constant intensity in between.

As I understand it, Dr. Grobstein has asserted that this method of perception enables us to better know reality; recognizing a wall by its "wallness", which in this case I suppose is a large area of relatively constant reflected off-white light. Without this method, the many tiny variations in intensity of light reflected by the wall would be too confusing for our brains to recognize.

I am inclined to have a somewhat different view of the reason behind why we use this particular method of perception. Perhaps my reasoning will turn out to be an alternate way of presenting what Dr. Grobstein has, but it is not completely clear to me that it is. I appeal to Dr. Grobstein and others for assistance in resolving this.

Perhaps it is dangerous to compare our brain to a computer, but certainly the two have similarities. They both work via digitized data, have thousands of electrical connections, and both have finite assets (although the brain has numerically greater assets, with 10^13 neurons versus a Pentium's 3x10^6 transistors). My take on why the brain would use only information about contrast in building a picture is simply that it requires considerably less processing power from the brain's assets. Consider a rectangle, half white, half black. A computer could be programmed to display it on a screen via two methods. One would be to send information to the screen about every single pixel forming the rectangle. Depending on the size of the rectangle, this could easily be many thousands of bits of information, all of which would have to be processed and sent separately. The second method is considerably less demanding in terms of data moving and processing and is similar to how our nervous system paints the picture of the rectangle in our heads when we view it. This method would send information about the size of the rectangle, the location of the border between white and black, and which side is to be white and which is to be black. This would amount to only a few hundred bits of information at most. Much time and processing power would be saved over the first method.
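
The two methods can be compared with a quick back-of-the-envelope calculation. The sketch below uses assumed sizes (a 200 x 100 rectangle, 16-bit numbers for the description), so the exact counts are only illustrative; the point is the orders-of-magnitude gap between sending every pixel and sending an edge description.

```python
# Pixel-by-pixel versus edge-description encoding of a half-white,
# half-black rectangle (all sizes are assumed for illustration).

WIDTH, HEIGHT = 200, 100            # hypothetical rectangle, left half white

# Method 1: one bit per pixel, every pixel processed and sent separately.
pixel_bits = WIDTH * HEIGHT          # 20,000 bits

# Method 2: describe the size, the location of the vertical border, and
# which side is white -- a handful of small numbers instead of a bitmap.
edge_description = {"width": WIDTH, "height": HEIGHT,
                    "border_x": WIDTH // 2, "white_side": "left"}
edge_bits = 3 * 16 + 1               # three 16-bit numbers plus one bit

print(pixel_bits, edge_bits)         # 20000 versus 49
```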

Is this not the reason our brains create a picture of the world in this way? It simply requires considerably less processing and memory assets than a picture formed by the input of every photoreceptor. This may not have only been an evolutionary advantage, but a requirement due to the physical limitations of what our brains can process. Have I just restated what Dr. Grobstein has presented in class, or am I completely astray here? Any clarification would be appreciated.

Thank you for usefully sharpening the issues. Yes, one might argue that lateral inhibition is simply a data compression mechanism, and let it go at that. But that ignores two important things. The first is that it is a data compression mechanism which is appropriate under some circumstances but not under others (the clouds of Venus, or the still more intriguing situation of entities which are distinct in non-spatially localizable ways). In this sense, the existence of the data compression mechanism (whatever its actual evolutionary history) represents important genetic information about the character of "reality": it is of a sort in which one can survive using a particular data compression mechanism, one with (at least some) important, spatially discrete entities. And that's the other important thing. The information falling on the eyes of individuals DOESN'T actually provide very good evidence for the existence of spatially discrete entities: it is too variable, both from different parts of such entities and over time, as illumination changes. And yet we know (or believe we know) there are discrete entities in "reality". It is the lateral inhibition networks (among other things) which give us the insight (or belief) that there are spatially discrete entities, and hence access to what (we believe) is a reality more stable and fundamental than the much more fluid and nondiscrete information we get about it. Does that help any? PG


Vero


Waldrop

The scorpion has highly sensitive receptors in its legs that tell it the location of prey moving in sand. These receptors are used instead of sight and hearing, detecting the movements of prey by the vibrations in the sand. Since this method is effective for scorpions, other animals must have sensitive receptors in areas other than their eyes, ears, etc. Humans can feel an earthquake, but cannot determine the epicenter. The scorpion is so sensitive that it knows from which direction the vibrations are coming. It could be that this is not unusual, but seems so because humans are so insensitive. Farm animals exhibit strange behavior before earthquakes, and they know when it is going to rain. They have receptors that can combine information like humidity and barometric pressure, and this leads to a response. Humans either do not have receptors that are sensitive to these conditions or, more likely, are sensitive to the conditions but do not form a central pattern generator to create a response that a human is consciously aware of.

Good summary of the reality that perceptions of reality are necessarily different in different organisms. And an interesting issue. What makes you think it is "more likely" that humans have the receptors but can't make a response they are "consciously aware of"? There are several interesting dissociable ideas in the latter. I'll bet you there are some receptors other organisms have which we simply don't. On the other hand, I think it is ALSO true that there are senses we have of which we are unaware, and perhaps actions we take because of those senses of which we are generally also unaware. Can you provide likely examples? Which is to say, can we be MADE aware of such things? PG


Yi

After learning about the picture on the retina versus the picture in the brain and how the brain is a synthesizer, I am beginning to understand the hesitancy that I feel when I am drawing. For example, when I am drawing a figure, my eye seems to be telling me one thing and my brain is telling me another. For me, every art class is a battle because I can't draw exactly what I see.

Very interesting. I'd like to hear more about it. If you can describe the differences more, it might lead to some very interesting ideas about how the brain works. PG

