Rather than hunting through my archive of papers, email messages, and presentation materials to assemble a complete and coherent account of the assumptions and mechanisms that make up MneMonic (as I implemented it), I'll just point you to my current dissertation proposal. On this page I'll keep the history of MneMonic. I do that here because I find myself repeating bits of it to people in order to give them the diachronic context needed to understand this project of mine. Cognitive architectures normally get born out of
Have you ever been told by a machine that you are unfair? A machine that has just been through several megabytes of Project Gutenberg fiction, gathering statistical data on the occurrence of terms and phrases, thus building up something you were hoping would be a concept network? As I was working on MneMonic I figured it was potentially dangerous, so I erased the prototype, encoded the architecture in parts of my archive, and threw away(?) the decoding key in a file that I later refined into a bit of dizzy commentary. If I ever go back to MneMonic it will not be as a cognitive architecture, but I may use parts of it to make a MonDoc engine smart enough to know when to prompt the user for settings and when to use predefined defaults.
I developed MneMonic as a Computer Science project. The professors were not at all interested in cognitive science; actually they despised it as a lot of philosophical mumbo-jumbo. They were more interested in the actual results achievable through a more applied approach. They were willing to accept only Artificial Intelligence of the weak type, concerned with functional results (replicating the function of [human] cognitive abilities) and with implementations thereof. Actual models (valid conceptually, psychologically and/or physiologically) were not their concern at all. As a result, my BA thesis contains only one page dedicated to my reasons for choosing the specific intuitions, assumptions and solutions I built into the hybrid architecture that currently makes up MneMonic. Things might have been a lot different had Tony Birch stayed on as my advisor rather than leaving AUBG for some strange reason...
At the time I implemented MneMonic, I didn't know much about Cognitive Science; I coded based on hunches, presuppositions, operationalizations of encyclopaedic definitions of terms (like 'concept', 'meaning', 'term', etc.), lots of NN theory I found on-line, semiotics dug out of Peirce via Eco, and lots and lots of incremental optimization. Beforehand I had some idea of semiotics and DBMSes, and I had played with Eliza and with backpropagation in NNs. And I had 'common sense' and introspection to rely on. Really good computer scientists need to develop their introspective abilities to a high degree, because they are constantly required to formalize and operationalize every bit of the functionality needed in the programs they develop. Even though MneMonic is not based on the huge amounts of personal and peer-generated research that ACT-R and similar architectures are, it is consistent with all the results I've obtained or read about so far (as far as I can tell). So I like to call MneMonic a cognitive architecture.

For my BA thesis in Computer Science, I deployed MneMonic with text-only inputs, in the shoes of Commander Data (the interface sported an image of Brent Spiner in his role as a Starfleet android). Most of the research I did for it was on all the types of NN architectures I could lay my hands and eyes on. Still, I was not satisfied with their performance; the darn things forget all they learn in one session if you train them for something else. So I added a symbolic layer (I was calling it semiotic :) by using ATMs. The creation of excitatory and inhibitory links within the network was handled by the neural methods, piecemeal, as the input went by, with a completely unrealistic window of activation - but I learned about that later, in my Psychology classes. For training, I passed lots of fiction from the Gutenberg archives through it. To my chagrin, it didn't work as originally designed, starting with no network and only the mechanisms in place.
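Since the prototype is gone, here is a toy sketch of what incrementally built excitatory/inhibitory links with spreading activation can look like. This is purely illustrative: every name and number below is my invention, not MneMonic's actual code, which no longer exists anywhere.

```python
class ConceptNet:
    """Toy concept network: weighted directed links between concepts.
    Positive weights are excitatory, negative weights inhibitory."""

    def __init__(self):
        self.links = {}  # (source, target) -> accumulated weight

    def link(self, a, b, weight):
        # Accumulate link strength piecemeal, as co-occurrences go by.
        self.links[(a, b)] = self.links.get((a, b), 0.0) + weight

    def activate(self, seeds, steps=2):
        # Spread activation from seed concepts along weighted links.
        act = {c: 1.0 for c in seeds}
        for _ in range(steps):
            nxt = dict(act)
            for (a, b), w in self.links.items():
                if a in act:
                    nxt[b] = nxt.get(b, 0.0) + act[a] * w
            act = nxt
        return act

net = ConceptNet()
net.link("android", "Data", 0.8)    # excitatory: often co-occur
net.link("Data", "emotion", -0.5)   # inhibitory: rarely together
act = net.activate({"android"})
# 'Data' ends up strongly activated; 'emotion' is pushed below zero
```

Unlike a plain backpropagation net, links here persist across "sessions" unless explicitly overwritten, which is one (very crude) way around the forgetting problem mentioned above.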
I had to code in a generative grammar (put together by my English Language professor). After that, it could converse with a user via keyboard input, or check and attempt to correct the grammar of new inputs. It was really, really bad with ambiguous phrases, and not especially coherent in its answers. Which is strong evidence for the nativist hypothesis, though I blame my implementation for it, and my lack of knowledge about cognitive processes.
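"Coding in a generative grammar" can be sketched as a tiny phrase-structure expander. The rules below are invented for illustration; my professor's actual grammar is not reproduced here.

```python
import random

# Minimal phrase-structure grammar: nonterminal -> list of productions.
# These rules are made up purely to show the mechanism.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["android"], ["user"]],
    "V":   [["answers"], ["greets"]],
}

def generate(symbol="S", rng=random):
    """Recursively expand a symbol; unknown symbols are terminals."""
    if symbol not in GRAMMAR:
        return [symbol]
    out = []
    for sym in rng.choice(GRAMMAR[symbol]):
        out.extend(generate(sym, rng))
    return out

sentence = " ".join(generate())
# e.g. "the android greets a user"
```

Running the same rules in reverse (parsing rather than generating) is what lets such a system check an input's grammar, though disambiguation is exactly where a grammar this naive falls apart.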
Later on, during my MS studies in Cognitive Science, I kept finding terminology, research and results consistent with the intuitions I'd had while working on MneMonic. But due to the high pace of study and the enormous range of fields I was supposed to cover (philosophy, psychology, linguistics, [neuro][electro]physiology, cognitive modeling), I never actually worked for more than a week in a row on MneMonic itself. And when I did, it was to put together presentations for the student seminars or for my advisor, until I figured out I'd never have time to even approach a good treatment of MneMonic in the half a year I was supposed to write my thesis in. So I applied to my current PhD program. And for my MS thesis, I developed further (and ran a study on) another project of mine, MonDoc.
Now, I may be able to do that work this time around, as outlined in my current dissertation proposal, or I may not. It all depends on the interaction of bureaucracy and scientific curiosity within my advisor(s).
More stuff I gotta check out:
Stay tuned, or send suggestions, funding or encouragement to email@example.com