(Evolving notes from my presentation on Reason and Irrationality
for Rob Stainton's course on Cognition and Language.)
Irrationality provides a great body of evidence against the rationalist and human-centric theories (like the Language of Thought or genetically-encoded Grammar), and for the Dynamic Systems approach to cognition. Irrationality, IMHO, is the most important applied topic in CogSci, since it addresses Representation and the rest of the abstract tools that we use in all endeavors: Philosophy itself, but also the socially more important ones, ultimately decisive for our cultural and physical survival: Conflict Resolution, Education, Science. This topic goes a long way toward defining us as human beings - and that is what it has been (and still is) misused for throughout our recorded history.
Overall, creativity and reason are the two opposing forces from which we derive knowledge. Creativity is the optimistic drive that comes up with possibilities, alternatives and general riff-raff. Reason is the pessimistic check on creativity, that determines which possibilities are plausible given specific conditions. All reason can do is find contradictions. There's comparably very little that reason-based traversals of the knowledge space can do to increase that space.
Reason without creativity does not lead anywhere. Indeed, without creativity, reason cannot exist. There have to be raw ideas for reason to sort through, and alternatives have to be generated when reason finds a dead end (a contradiction) in a debate. The Renaissance was the moment when a good number of learned people opened the door of the flawed realm of the abstract into its source: reality. They started combining God-type ideals with Nature-like, much better anchored observations. People (mostly those who could afford it) started broadening their cognitive horizons, exposing themselves to varied forms and sources of knowledge. Liberal arts education got its start. In my opinion, the result was the explosion of creativity that followed, in all the domains of life (art, education, social engineering, science).
And here we are. Along the way (some) philosophers started neglecting the fact that symbols are very rigid place-holders for the concepts behind them. We don't know yet exactly how humans handle concepts, but given the evidence from psychology and neurosciences, it's a good bet to say that human concepts are far more "flexible" than recorded symbols. From what I know, fuzzy logic is the best approximation of higher cognition with which symbol processing could come up (and it does so in ways similar to the fields described by/with Dynamic Systems).
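To make the contrast concrete, here is a minimal sketch of the fuzzy-logic idea mentioned above. All names and thresholds are illustrative, not a real model of any concept: where a rigid symbol is either true or false of something, a fuzzy concept assigns graded membership, and classical laws like non-contradiction stop holding exactly.

```python
# A toy fuzzy concept "tall" with graded (0..1) membership.
# The 160/190 cm anchors are arbitrary, purely for illustration.

def tall(height_cm: float) -> float:
    """Graded membership in the fuzzy concept 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the anchors

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)  # standard (Zadeh) conjunction

def fuzzy_not(a: float) -> float:
    return 1.0 - a

print(tall(175))  # 0.5 -- neither clearly tall nor clearly not
# 'tall AND not tall' comes out 0.5 rather than 0:
# the classical law of non-contradiction fails gracefully.
print(fuzzy_and(tall(175), fuzzy_not(tall(175))))
```

The graded boundary is the whole point: the symbol "tall" stays rigid on the page, while the membership function can flex with context, a little closer to how human concepts seem to behave.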
And reason in itself does not guarantee correct solutions. Reason can reach contradictions that are actually not contradictions, just gaps in our knowledge of the world. Also, reason can show that a hypothesis is incorrect, but cannot determine what makes it incorrect: the hypothesis itself or the causal chain that led to the contradiction. Copernicus' hypothesis that the Earth revolves around the Sun was reasonably contradicted in the context of the theory held in high regard by the learned world in his day. Consider the thought experiment of the 2D world: beings whose sensory apparatus allows them to observe only a flat world would have to find a way to explain their observation that certain objects (like a ball bouncing perpendicular to their plane of observation) tend to appear and disappear "out of thin air". To get back to my point: in order to function well, reason needs consistent input that covers all aspects of a phenomenon. Since our senses do not provide us with information from the entirety of the real world, reason cannot be expected to lead to correct solutions for the real world. All it can do well is handle causal relations in imperfect models of the real world. Thought experiments and the like may well give us the feeling that they help us understand the world, but in fact, like any powerful devices, they are dangerous because they can pull us away from reality.
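The 2D-world scenario can even be worked out numerically. The sketch below (all names illustrative) computes what the flat-world beings would see as a sphere of radius R passes through their plane: a circle that appears from nothing, swells, shrinks, and vanishes.

```python
# Cross-section a sphere of radius R presents to a 2D world,
# as a function of the sphere's height z above the plane.
import math

def visible_radius(R: float, z: float) -> float:
    """Radius of the circle the 2D observers see; 0 when the sphere misses the plane."""
    return math.sqrt(R * R - z * z) if abs(z) < R else 0.0

R = 1.0
for z in (1.5, 1.0, 0.5, 0.0):
    print(f"z={z:+.1f}  visible radius={visible_radius(R, z):.3f}")
# As z drops below 1.0 the object appears "out of thin air",
# reaches full size at z=0, and later disappears the same way.
```

To the 2D observers the appearance is a genuine contradiction with their physics; to us it is just a gap in their sensory coverage, which is exactly the point about reason and incomplete input.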
Analogy's a great example, and it has got a lot of attention from psychologists, AI researchers and cognitive scientists in general. The basis of a thought experiment, analogy takes two domains, sometimes wildly different on the surface but linked through perceived relationships among the elements of those domains. Then, through a process of transfer from one domain to the other, elements of one domain can be hypothesized to exist in the other domain. [Example]. Most of Science has advanced exactly on such a basis. Thought experiments use analogical transfer as their final step. Once you see the relation or feature in the mini-world created by the thought experiment, it's easy for the author of the experiment to transfer that relation or feature into our world. But not all experiments are externally valid, because most features and relations available for transfer between the domains are not "transferable".
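Here is a toy sketch of that transfer step, using the stock solar-system/atom analogy with hand-built relational descriptions (the relation names and object mapping are all illustrative, not any particular researcher's model). Relations shared between the two domains license hypothesizing base-only facts in the target, and, as argued above, nothing guarantees those hypotheses are valid.

```python
# Base domain: solar system; target domain: a partially described atom.
# Facts are (relation, subject, object) triples.
base = {
    ("revolves_around", "planet", "sun"),
    ("more_massive", "sun", "planet"),
    ("hotter", "sun", "planet"),
}
target = {
    ("revolves_around", "electron", "nucleus"),
    ("more_massive", "nucleus", "electron"),
}

# Object correspondence suggested by the shared relations.
mapping = {"planet": "electron", "sun": "nucleus"}

def transfer(base, target, mapping):
    """Hypothesize base relations in the target via the object mapping."""
    hypotheses = set()
    for rel, a, b in base:
        fact = (rel, mapping.get(a, a), mapping.get(b, b))
        if fact not in target:  # already-known facts aren't hypotheses
            hypotheses.add(fact)
    return hypotheses

print(transfer(base, target, mapping))
# -> {('hotter', 'nucleus', 'electron')}: a candidate inference
#    that happens to be false of real atoms.
```

The one hypothesis the sketch produces is exactly the kind of non-transferable feature the paragraph above warns about: the mapping is structurally fine, yet the transferred relation fails in reality.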
[Address the strong and weak points of the other tools of reason]
That does not mean that I completely buy the picture drawn by the psychologists either. In their quest for observing individual phenomena in an ocean of interrelated, concurrent, top-down and bottom-up signal processing, they also fall into the trap of identifying the model with reality, and they are unnecessarily "at war" with each other over the details of the boxes (modules) and processes involved.
Since I refuted both suggestions currently on the table ;), I have to at least suggest an alternative. The way I see it, human rationality is better described by the constrained field interactions in an n-dimensional abstract space continuously warped by context, to which a Dynamic Systems approach to concept formation might lead. Relevance theory, dealing with constraints instead of structures, is the closest to this approach of all the ones I've met in my readings. But that's a completely different topic. Back to the idea that's been floating around since the early Greeks started recording their thoughts on cured animal hides:
rationality is an ideal goal, not a description of actual human behavior.
As I see it, most of the reason why exceedingly normative "rationism" thrives in philosophical circles is the very faulty analogy of humans as computing devices.
Sam was asking us to think of "unreal" concepts for his PhD dissertation:
Here's the small list I came up with:
My own research in concept formation (which I did while tinkering with MneMonic - my NL project), seems to be at odds with [Sam's] approach.
I would expect "unreal" concepts to be part of the basis for "real" ones. To put it simply, here's a sketch of the synchronic "story-of-WHAT" in which I believe (cos there's also a diachronic one, that helps explain the HOW):
Throughout our ontogenesis (even before birth, that is), we are subsystems immersed in a world of change. Some of these changes are accessible to us through our senses (and I don't limit the concept of senses to the receptors that anatomy is currently describing as such -- I like to keep an open mind :)
Our nervous system, as it evolved to allow us to make use of these changes, separates signals (consistently patterned change) from the rest of the environment. Change in the environment on the same channel as a signal is, of course, noise. All animals make use of signals, since the environment-signal conversion is biologically automated at the level of the receptors.
But there are so many signals that storing all dependencies observed, while impossible in itself due to the infinite nature of the environment, leads to a continuous reshaping and specialization of the nervous system.
Now here is the gap in actual knowledge: signal (or data) to information, a.k.a. concept acquisition or [probably] belief-formation [since belief-formation has been philosophized totally out of joint]. Shamans like Fodor are trying to breach this gap with voodoo terminology like the LoT (just as his predecessors warped the approximation philosophers called soul).
But back to what we do know (if psychology is to be trusted as science): we are wired to observe events and store them in multimodal structures to which some refer as imagery. But that's not all: we are also wired to pick up correlations in imagery and to store these second-level correlations in structures similar to those that handle imagery [cos nervous tissue ain't too diverse functionally]. This leads to a jolly recursive process from which most cognitive processes emerge. With a twist: that biological recording ain't mechanical. Some of the correlations made have a chance to be off. The higher the level of abstraction from reality, the higher the probability of erroneous correlation. Whence we get erroneous or irrational behavior, inconsistent knowledge, creativity, insight and other effects, some of which we like, some of which we don't.
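That jolly recursive process can be caricatured in a few lines. This is a rough sketch under strong simplifying assumptions of my own (events as symbol sets, "imagery" as co-occurrence counts, noise standing in for the non-mechanical biological recording), not anyone's actual theory:

```python
# Level 1: correlate raw events. Level 2: correlate the correlations.
import random
from collections import Counter
from itertools import combinations

def correlate(events, noise=0.05, rng=random.Random(0)):
    """Count pairwise co-occurrences, occasionally mis-recording one (the noise)."""
    counts = Counter()
    for episode in events:
        for a, b in combinations(sorted(episode), 2):
            if rng.random() < noise:          # an erroneous correlation sneaks in
                b = rng.choice(sorted(episode))
            counts[(a, b)] += 1
    return counts

episodes = [{"thunder", "lightning", "rain"},
            {"thunder", "rain"},
            {"lightning", "rain"}]
level1 = correlate(episodes)                  # first-level correlations
# Feed the strongest level-1 pairs back in as new "events": the recursion.
level2 = correlate([{pair for pair, _ in level1.most_common(2)}], noise=0.0)
```

Raising the `noise` parameter makes mis-correlations more frequent, and since each level correlates the (possibly wrong) output of the one below, errors compound with abstraction, which is the twist described above.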
And back to our muttons, if I didn't bore you already. For many of our cognitive processes we use hypotheses. That is, we are wired to project some correlations over others, using what we know about the world in order to predict the outcomes of our behavior. All animals can do that to a certain degree, but humans exapted it to a strong survival trait.
And here we are. Hypothetical creatures. What would happen if we cross ideal symmetry with deer? We may well get horses with only one horn... What happens after we die? Since we don't like the idea of completely disappearing, we may stay around in incorporeal manifestations. And so on.
You never know how many hundreds of thousands of unreal concepts had to "die" in the interaction with reality for a single "real" concept to "survive". If the history of Science taught us something, it is that *all* our concepts may be "unreal".
I talked with Thomas Ward five years ago, and he accepted this as a possible explanation for the lack of "creativity" his [untrained] subjects demonstrated. When asked to make aliens, people simply put together things they already know about; since their culture teaches them that the human is the only creature with intelligence, they simply warp the human frame to generate the alien frame.
I sort of generally agree with what you say, except for the part where you say that all our concepts are "unreal". The fact that I use humans to make aliens doesn't mean that aliens don't exist. Maybe the word you are looking for is "contingent"? :-)
"contingent" is too loaded, too vague, has too many meanings for what I mean.
I don't mean aliens are not real. I mean our concept of aliens, even of a specific alien individual cannot be a real depiction, but mere approximation, due to the nature of the process of learning (incidentally that's why I think the "type/token" dichotomy is misleading in any sense deeper than surface semiotics.)
Here are some of my beliefs:
(1) there is reality out there, and facts, and all the laws that we don't know or we're just approximating in the process of scientific investigation.
(2) we access that reality through our senses (specific channels that give us access to a small amount of the real signals)
(3) we build our concepts recursively based on 3 sources (sense data, convention, concepts) by using unreliable cognitive tools (biological memory with all the decay and cross-reference effects therein, inference, generalizations, etc)
(4) convention is not "real": it is "out there" (in the minds of others) only if it reached those others and they internalized it the same way we internalized it.
(5) since the tools and two of the sources of concepts are not real, concepts are "unreal"
In other words, concepts are the closest approximation to reality that we have reached at the moment. It's undeniable that our concepts change throughout our ontogenesis (even while we have a cup of coffee or a disturbing conversation), based on other laws than the laws of "reality". I am interested in exactly those laws of cognition following which we change our concepts in the continuous, life-long process of learning.
Unrelated here, but I wanted a quick place to save this great comment on how come we're 98% ape: http://ist-socrates.berkeley.edu/~jonmarks/aaa/marksaaa99.htm