We started by wondering whether simple things interacting in simple ways could learn. And the answer is yes. So it's certainly not true that computers can do only what they are told to do. Or, at least, it's not true that one has to tell them explicitly what to do for every example of what you want. You can give them a general set of operating instructions, and a few specific examples, and the computer will not only learn the specific examples but use them to create for itself a rule, a categorizing scheme, that it can apply to additional cases.
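To make this concrete, here is a minimal sketch of such a learner: a perceptron given a general learning rule (adjust the connection weights whenever a guess is wrong) plus a handful of labeled examples. The sketch is in Python rather than the Java of the applet linked below, and the features (height and width) and example animals are illustrative assumptions, not the exhibit's own data.

    # A perceptron: a weighted sum of features, thresholded to a yes/no answer.
    def train_perceptron(examples, weights, bias, rate=0.1, epochs=100):
        """Nudge the weights after every wrong guess until the examples are learned."""
        for _ in range(epochs):
            for features, label in examples:          # label: 1 = elephant, 0 = rabbit
                guess = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
                error = label - guess                 # 0 when right, +1 or -1 when wrong
                weights = [w + rate * error * x for w, x in zip(weights, features)]
                bias += rate * error
        return weights, bias

    def classify(features, weights, bias):
        return "elephant" if sum(w * x for w, x in zip(weights, features)) + bias > 0 else "rabbit"

    # A few specific examples: (height, width) in arbitrary units.
    examples = [((0.2, 0.3), 0), ((0.3, 0.2), 0),     # small, thin -> rabbit
                ((0.9, 0.8), 1), ((0.8, 0.9), 1)]     # tall, wide  -> elephant

    weights, bias = train_perceptron(examples, weights=[0.0, 0.0], bias=0.0)
    print(classify((0.7, 0.6), weights, bias))        # an animal it was never shown

The general operating instructions are the few lines of train_perceptron; the specific examples are four animals; the rule the computer ends up with, the final weights and bias, is something it constructed itself and can apply to animals it has never seen.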
What's particularly interesting is that the rule the computer creates may or may not be the one you had in mind. You might have had in mind that the shorter and thinner something gets, the more it should be called a rabbit (like one scheme the computer came up with), but the examples experienced are, for the computer, equally consistent with most things, even quite short and thin things, being elephants. This may seem silly, but it actually says something quite important about how many different solutions there are to particular problems, about the extent to which experience can account for observed generalizations, and probably about brains and people as well.
So:
Is it a good thing or a bad thing that different individuals may learn different things from the same set of experiences? That depends, of course, on one's perspective. If one wants uniformity out of an educational process, it's not so good, in which case one ought to try to make not only the learning experiences but also the starting conditions as similar as possible for everyone. An alternate perspective is that there are lots of somewhat different ways to do the same task, and people who learn to do it differently from one another can as a consequence subsequently learn additional things from each other (as they might learn from elephants or rabbits or computers).
That learning depends not only on particular experiences but also on pre-existing structure has some significant broader implications as well. In the sense in which it is used here, learning, with its accompanying creation of somewhat arbitrary categories, is not, of course, something which occurs only in a classroom. It is instead the basis of most (all?) human understanding, including the scientific. As William James put it in his 1890 Principles of Psychology: "What we experience, what comes before us, is a chaos of fragmentary impressions interrupting each other; what we think is an abstract system of hypothetical data and laws". The former are the experiences from which learning occurs, the latter the categories one uses to make sense of "reality". The issue is how one gets from one to the other: whether the categories are inherent in the experiences, and hence a reflection of the "real world", or are something else. James' answer: "Every scientific conception is, in the first instance, a 'spontaneous variation' in someone's brain. For one that proves useful and applicable there are a thousand that perish through their worthlessness. Their genesis is strictly akin to that of the flashes of poetry and sallies of wit to which the instable brain-paths equally give rise. But whereas the poetry and wit (like the science of the ancients) are their own excuse for being ... the 'scientific' conceptions must prove their worth by being 'verified'. This test, however, is the cause of their preservation, not of their production ..." In short, the validity of science and related forms of advancing human understanding of external "reality" depends fundamentally not on the purity (or "objectivity") of their observations, but rather on the continual testing and retesting of generalizations which have, at their root, an origin in arbitrary categories.
Is there ONE kind of network, yet to be found, which could itself account for all kinds of learning? No one knows, of course. But some of the conclusions we've reached here may be relevant in thinking about the question. Learning is not something which results simply from experience, but instead something that results from interactions between experience and pre-existing structure. It seems likely that different pre-existing structures (in terms of both network architecture and learning rules) would differ in their ability to learn different things (just as the pre-existing synaptic weights affect the categories created). If so, the bottom line may well be that there is no single, optimal learning network, and that "learning" itself will always need to be understood in terms of a series of different networks optimally structured for different tasks. And this, of course, presumes that one knows in advance all learning tasks. If learning itself generates new sorts of things to be learned, one can confidently expect (and happily so, if one likes open horizons) that no single network will ever "solve" the learning problem.
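The point about pre-existing structure can be seen directly in the same toy perceptron: two networks that differ only in their starting weights, trained with the same learning rule on the same examples, can end up with different categorizing schemes. The starting weights below are arbitrary choices made for illustration.

    # Same assumed features (height, width) and examples as the sketch above.
    def train(examples, w, b, rate=0.1, epochs=100):
        for _ in range(epochs):
            for x, label in examples:
                guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                error = label - guess
                w = [w[0] + rate * error * x[0], w[1] + rate * error * x[1]]
                b += rate * error
        return w, b

    examples = [((0.2, 0.3), 0), ((0.3, 0.2), 0),     # rabbits
                ((0.9, 0.8), 1), ((0.8, 0.9), 1)]     # elephants

    net_a = train(examples, w=[0.0, 0.0], b=0.0)      # one pre-existing structure
    net_b = train(examples, w=[-0.5, 0.5], b=0.0)     # a different one

    # Both nets get every training example right, yet they can disagree about a
    # novel, in-between animal: the same experiences, learned differently.
    novel = (0.2, 0.6)
    for name, (w, b) in [("net A", net_a), ("net B", net_b)]:
        answer = "elephant" if w[0] * novel[0] + w[1] * novel[1] + b > 0 else "rabbit"
        print(name, "says", answer)

With these particular starting weights, one net calls the short, wide newcomer a rabbit and the other calls it an elephant, even though their training histories were identical.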
Rosenblatt's Perceptron Learning Algorithm, a Java implementation which allows exploration of variations in learning parameters
Neural Nets, an online version of a book by Kevin Gurney, Psychology Department, University of Sheffield, United Kingdom
An Introduction to Neural Networks, by Leslie Smith, Centre for Cognitive and Computational Neuroscience, University of Stirling, United Kingdom
Neural Computing, course notes from Department of Electronics and Computer Science, University of Southampton, United Kingdom
Backpropagator's Review, by Don Tveter
Showcase, from Intelligent Financial Systems Ltd, includes examples of practical neural net use and some Java tutorials illustrating back-propagation networks.
FAQ for comp.ai.neural-nets newsgroup
Links related to neural networks and other simple interacting systems capable of learning are available from Artificial Life On Line
A Brief Introduction to Genetic Algorithms, by Moshe Sipper, Swiss Federal Institute of Technology
The first few chapters of James' Textbook of Psychology are available online, together with the texts of several articles by him, at Classics in the History of Psychology, from York University, Canada
William James, an extensive web resource by Frank Pajares, Division of Educational Studies, Emory University
Mind and Body: Rene Descartes to William James, by Robert Wozniak, Department of Psychology, Bryn Mawr College
Science Education, from Serendip
I'm intrigued. Can I go back to the beginning, please?