Radu Luchian(ov): Pastime: Commentary
On Theory

(sparked by discussion on the cog-grads list and in the Research Seminar
of the PhD program at Carleton University.)

"Teoria sine praxis e cum rota sine axis."
-- I'd appreciate a reference for this one :)

Theoretical awareness, which I can also call formalism, is a continuum on which I can distinguish three discrete "kinds": [practice or applied research - uses observation and feedback], [informal/internalized theory - with no particular need for symbols or consistency] and [formal theory - uses both consistency and symbols]. And then there's [dogma]: stuff that the people who uphold it consider to be theory, but which is based on wild (unchecked) assumptions and arbitrarily interpreted evidence. So I consider dogma to be time badly spent, useful only as a negative example, a background against which to contrast really useful theory.

OK, here's the debate at the highest epistemic level into which I keep bumping on my wavy road through Academe and the Industry: what's the relation between theory and practice? Between formal research and applied research? Ultimately between stuffy theoreticians and air-head engineers? In fact, it's one of the most important relations, the outcome of which shapes who we are and how we approach life in general, work, even entertainment. There's been a lot of ink spilled on the topic, mostly by theoreticians - since applied folk don't much care about it one way or the other as long as their contraption works.

Basically, that's the difference in attitude. Theoreticians see the world in the abstract, carefully splitting the hairs on the head of Chaos to make sense of the world around them. Applied folk take the world as it comes and concentrate their efforts on making the world work for them.

But there's a deep fallacy in looking at the matter that way. As the saying at the top of the page goes, "theory without practice is like a wheel without an axle". And that works both ways. An axle ain't much good without its wheel either. Applied researchers, like Rodney Brooks, who say they can do cognition without representation, or nut-head engineers who think they can tinker with their toys without the need for theory, are wrong (we'll see why below). At the other extreme, way-abstract theoreticians, idealists, who say the world's in our heads, may be right to a certain extent, but no matter how we interpret it, we still bump into the world: there are laws out there that no amount of theorizing, hand-waving, democratic decision or agreement in groups of thinkers can change.

The fallacy I mentioned (about looking at the world completely theoretically or completely pragmatically) comes from the fact that we are communicative social critters. From the moment we learn the meaning of "WHY", we can't help but use some sort of theory in whatever we do. Whether or not one takes on board the goal-directed action paradigm - in any of the ways it was expressed throughout the ages - one still has to explain to oneself and others one's actions at a particular time. Otherwise one is considered uncooperative by one's peers, unreliable by society, and is either left alone to pursue their chaotic meanderings through life, or placed in a nuthouse. Suffice to say, any action is directed by some form of implicit theory, often a very complicated one: you go to work because you are interested in what you do and/or expect to be paid in order to procure the resources to survive and do what you are interested in. You get dressed in the morning because you think people would call you immoral if you went out naked and/or because you know it's cold and you don't like the cold. These seem like simple theories (some won't call them theories, but 'just' motivation), but they are only superficially simple. Anyone who has ever tried to model such behavior formally (logically, in a computer program, etc.) knows that in order to ground it in any way, more dependencies become apparent every time one looks at the problem.
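
To make that concrete, here's a toy sketch (my own illustration, with made-up predicate names, not anything from a real model) of how a one-line "theory" about getting dressed sprouts dependencies the moment you try to ground it in code:

    # Naive first pass: dress if it's cold or if going out naked is frowned upon.
    def should_get_dressed(world: dict) -> bool:
        return world["temperature_c"] < 15 or world["norms_forbid_nudity"]

    # Second pass: "cold" turns out to depend on wind, time spent outside and
    # personal tolerance; "norms" depend on where you are going and who is there.
    def should_get_dressed_v2(world: dict) -> bool:
        feels_cold = (
            world["temperature_c"] - world["wind_chill_c"] < world["cold_threshold_c"]
            and world["minutes_outdoors"] > 5
        )
        norms_apply = world["destination"] != "home" and world["observers_present"]
        return feels_cold or norms_apply

    # A third pass would need beliefs about the observers, the season, the state
    # of the laundry, and so on: each refinement exposes new dependencies.
    world = {"temperature_c": 3, "wind_chill_c": 5, "cold_threshold_c": 10,
             "minutes_outdoors": 20, "destination": "work", "observers_present": True,
             "norms_forbid_nudity": True}
    print(should_get_dressed(world), should_get_dressed_v2(world))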

So ever since the times we lived in caves and started using tools and drawing conclusions about the systematicity of the world, we've been using theory. That leads me to think that any entity that can learn and adapt to its environment (animals, even plant life and our silicon contraptions) is using theory at some point on the continuum I described at the beginning of this text: sets of consistent representations linked in causal chains that can be used to understand current observations and predict future ones. Informal as it is, internalized theory is what we use naturally. Then, together with writing systems, along came the formal theorists. They started recording bits of informal theory and comparing them against each other, finding contradictions, then refining them. This is the process of Science: recursive, incremental approximations which lead to more and more robust theories, covering more and more evidence. In my opinion, the ultimate formal tool is the utterly abstract field of Mathematics. As I look back at my own interpretation of the history of Science, I can see how subjects of Philosophy and Religion (informal theories) have gradually been formalized into fields which are more and more mathematically (or otherwise consistently formally) tractable and preferably provable (both formally and practically): Physics, Chemistry, Biology, Geology, Astronomy, Psychology, Linguistics, Economics, etc.
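
As a rough toy sketch of that incremental process (again my own illustration with hypothetical names, not a claim about how any real science is formalized): a "theory" here is just a set of cause-effect links used to predict observations, and any contradicting or missing evidence triggers a refinement, so each revision covers more of the evidence.

    # A 'theory' as a map from causes to expected effects.
    def predict(theory: dict, cause: str):
        return theory.get(cause)

    # Incremental approximation: fold missing or contradicting observations
    # into the theory, so each revision covers more of the evidence.
    def refine(theory: dict, evidence: list) -> dict:
        revised = dict(theory)
        for cause, observed_effect in evidence:
            if revised.get(cause) != observed_effect:   # gap or contradiction found
                revised[cause] = observed_effect        # adopt the observation
        return revised

    informal_theory = {"clouds": "rain"}                # the internalized starting point
    evidence = [("clouds", "rain"), ("clear sky", "no rain"), ("frost", "snow")]
    formal_theory = refine(informal_theory, evidence)
    print(formal_theory)    # now covers three observations instead of one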

But we humans, with our cognitive abilities running on relatively unreliable biological mechanisms, have devised gradually more reliable tools, culminating in the symbol-computational devices we call computers. At this point some people push the formalism rather too far, projecting theory over the evidence. Philosophers have speculated and continuously pushed into public awareness the idea that our cognitive abilities are at some level or another perfect. Formalisms which do that are, among others: logic, internalized grammar, strong forms of cognitive computability, symbolic modeling, any form of cognitive engineering. To these I say: wake up from your theoretical dreams and smell the flowers of evidence. We suck at some forms of deduction, we have a hard time communicating concepts outside of conventional domains, we're subject to memory decay and a large set of (multi- or unimodal) illusions, and our learning is not at all consistent (we pick up some things after only one exposure and never forget them, while other things, even if we consider them important and spend hours, days, years studying them, we can't recall and forget very fast).

Now to take the other point of view. In fact, if you look at it from the right angle, theory is simply condensed practice. One learns best from experience, but we would be mad not to make use of the experience accumulated throughout the ages in theoretical systems.

And there's a whole other angle that makes Academe a woe for any Industry-minded, results-oriented person. Er... that would be it, results. You can never know when an idea will spark, or when a formal result will achieve the external validity needed for real-life application (since most formal research is conducted in the tiny enclaves Allen Newell was so upset about when he wrote "You can't play 20 questions with Nature and win").

Combine that with arbitrary deadlines (the term-and-year-based organisation of the learning community: study, conferences, etc.), all sorts of Committees and bureaucracy, and it's no wonder there's hardly anyone doing actual theoretical research.

But the fact that our social organization gets in the way is no reason to look down on Theory. I vaguely remember reading a modern philosopher's writing On Necessity. Well, theory and practice are so well intertwined that attempting to do one without the other seems to be breaking some natural rule of necessity in the cognitive system we call mind.

Bake for 3 hours. Results may vary. :)

Then, once the menu's on the table, choose:

  1. pure theory (armchair researchers, sitting on their bums, letting their minds roam systematically over an infinitely expanding abstract conceptual space. With no reference to the real world - other than introspection, proved by psychologists to be faulty - there's almost no constraint on what conclusions are reached. The few constraints that ARE left in the mind of the armchair researcher are the limits of the individual's imagination and the constraints internalized by the individual during their own ontogenesis. Here's where I bunch the idealists)
  2. "pure" research (lab researchers up to their throats in splitting hairs and operationalizing. In systems like the cognitive one, the variables are too intricate, the causal dependencies too interrelated, to be studied individually. We know that signals go from almost everywhere to everywhere else; neuroscience findings killed every hope psychologists ever had of a neat top-down or bottom-up processing vector. Yet people still sweat in cubicles over weird equipment that flashes lines, circles, text and sounds. That approach may still be good for psychophysics, but in studies of more complex, context-sensitive phenomena like decision-making or analogical transfer, it seems out of the question. I guess that anyone who has ever really put their mind to operationalizing a hypothesis on cognitive phenomena knows that it's impossible to really control any experiment. If some researcher feels there was no problem doing so in their case, pray let me know the details.)
  3. applied "research" (since there's no time to really keep up with the latest debates on cognitive topics when you have a schedule to keep to, engineers tinkering with their toys pick up some old theory that sounds plausible to them and use it as the basis for their tinkering, or only as mere theoretical support)

I think the outcome of 1 depends on how much the researcher has their feet on the ground in the first place, and that sort of excludes it as a really scientific method. The last two approaches have their pros and cons. If forced to pick among these three, most of the time I'd personally go for 2, but I would be very lax in my operationalization and I'd allow for as much embeddedness and as much situatedness as possible.

Here's an example of applied research going bonkers: the spark that led to the miracle of the Web, HTML (HyperText Markup Language) with its accompanying "feet" (the HyperText Transfer Protocol), was designed as part of a research project on SGML (Standard Generalized Markup Language), a formalization of one of the principles I kept in MonDoc under the title "content&form: separate editing of content and context". Anyway, the guy researching this at some government-funded think tank got exasperated with the lack of interest Academe systematically shows to any "screwball applied theory", and he started the Web as a better indexing tool on the Internet than any of the ones existing at the time (newsgroups, BBSes, Archie, Gopher). The great idea was a more cognitively transparent method of presenting ideas (breaking the linearity of books and other documents, a linearity artificially imposed on thought by speech and writing).

Big introduction, leading to this: once the Web spread like wildfire in the connected world, all sorts of "applied research" groups, gathered under the umbrella of the W3 Consortium, started "standardizing" the new medium, formalizing and expanding a simple and practical tool into (currently) tens if not hundreds of related standards. For those who don't know, standards are stuffy, mathematically formal documents that describe a simple idea in thousands of pages of self-serving formalism. Hey, humans are not machines! Why explain concepts to humans the way you have to explain them to computers? I say if you want the darn formalism, look at the code of the darn parsers! Duh.

Anyway, the convoluted point is that applied research doesn't necessarily produce real solutions.

Counterexample: Xerox PARC and other think-tanks have generated oodles of solutions which, even if neglected by the ones who sponsored the research, other companies have thrived on. Apple's computers spring to mind (the mouse and the WIMP paradigm of user interfaces).
