The general issue here is Knowledge Representation versus Environment Affordances:
p.3 bottom: your analogy with the map of unknown territory is flawed; indeed, it goes against your point. Landmarks from the map are compared with observed landmarks, so structure is used in equal parts within the agent and the environment. More in the environment, actually, since there can be many similar landmarks to check.
I suggest an analogy with the philosophical process of creating thought experiments.
p.5 first word should be "seen" not "looked"
p.8 second paragraph. It is a matter of opinion whether the returned signal is interpreted or not. After all, the robots look for specific patterns to react to, as part of their engineering design. Just as the structure designer builds representations in the agent, so does the engineer in a more distributed way (sensors and motors coupled in predefined ways)
p.8 last paragraph is irrelevant: "Web servers don’t exist in the world a priori, while rocks and walls do". For the simulated world, it does not matter whether the relations represented really exist in some postulated physical world.
p.10 par.1. What you call social situatedness and I call abstract relativity is not necessarily only second order. It's any order higher than one, because agents can share representations many levels above the physical ones. And in fact it always amazes me how little of the world we experience first-hand. Most of our knowledge of the world is higher order, coming from media such as books, maps, dictionaries, encyclopaedias, film, tv, etc.
p.10-11. No big difference between approaches 1 and 2 on one hand and 3 and 4 on the other. It's false to say that a device which can fly, open doors, detect curbs, etc. does not exploit the structure in the environment during design. To fly, it needs a specific environment, to open doors it has to find them, to detect curbs means directly using the environment. Wake up :)
Also, it's very inefficient to expect the environment to react to a minority of its users. Building massive, complicated hardware requires lots of maintenance and restricts the agent's access to only these highly responsive environments.
I see what you're trying to show, but what's easy in software environments is not easy in physical ones. So I suggest you change the examples to reflect this.
For example, your third approach is currently in use: they make roads for cars, in order to allow them to move faster and with less wear. But ask any government how much money goes into this type of relatively low tech Agent-Environment Co-design.
p.16 bottom. I suggest replacing "The cards 'tell' the player what he needs to do, he doesn't have to remember it." with "The cards 'help' the player remember what he needs to do, they facilitate choosing among possible strategies."
p.17 second par. Spooky. Something in the use of the term signal both as the sound made by birds and as a representational structure doesn't sit well with me... see my second general comment above.
p.17 bottom. Are you saying Active Design is an architecture? Methinks there's a long way from a methodology to an architecture...
p.21 listing. "Only active structures can be self-revealing." I disagree. It's not a matter of being active, but a matter of the relevance of the medium used for representing. All you've been doing over the last several pages is giving examples of matching and unmatching media: a phone can't read a bar code from the depths of a suitcase, but it can receive an RF signal. So I guess you may want to define self-revealing as 'self-describing within a commonly accessible medium.'
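To make the suggested redefinition concrete, here is a minimal sketch of what 'self-describing within a commonly accessible medium' could mean. All class and medium names are illustrative, not from the reviewed paper:

```python
# Hypothetical sketch: "self-revealing" as a relation between a structure's
# description media and an agent's accessible media, not as a property of
# the structure being active.

class Structure:
    def __init__(self, name, descriptions):
        # descriptions: medium -> self-description payload
        self.name = name
        self.descriptions = descriptions

class Agent:
    def __init__(self, accessible_media):
        self.accessible_media = set(accessible_media)

    def reveal(self, structure):
        """Return the structure's description if any medium matches,
        else None: the structure is not self-revealing *to this agent*."""
        for medium, payload in structure.descriptions.items():
            if medium in self.accessible_media:
                return payload
        return None

# The suitcase example: the barcode medium is blocked, RF is not.
suitcase_tag = Structure("tag", {"barcode": "ID-42", "rf": "ID-42"})
phone = Agent({"rf"})  # a phone can't see a barcode inside a suitcase
print(phone.reveal(suitcase_tag))  # -> ID-42, revealed via the RF medium
```

On this reading, the same passive tag is self-revealing to an RF-equipped phone and opaque to a camera, which is the point: revelation depends on the shared medium, not on activity.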
p.22 first par. "But once it takes control of a particular set of actions for a particular set of agents, the herald stops being robust enough to cater to a wide set of situations." Not true. The policy being heralded can be complete, catering to all forms of devices, and each device could respond to the part of the protocol that relates to its own kind. The key word is type. Then the option you consider in the next paragraph, "by all possible types of agents," becomes viable.
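A quick sketch of the type-keyed herald I have in mind. The policy clauses and device types below are invented for illustration:

```python
# Hypothetical sketch: a herald broadcasts a complete policy, and each
# device responds only to the clause addressed to its own type. Clauses
# for other types are ignored, not treated as failures.

HERALDED_POLICY = {
    "vacuum": "avoid the stairwell",
    "courier": "use the freight elevator",
    "drone": "land on pad B",
}

class Device:
    def __init__(self, device_type):
        self.device_type = device_type

    def respond(self, policy):
        # Select only the part of the protocol relevant to this type.
        return policy.get(self.device_type)

vacuum = Device("vacuum")
print(vacuum.respond(HERALDED_POLICY))  # -> avoid the stairwell
```

Because the herald's policy is keyed by type, it stays robust across a wide set of situations: adding a new device kind means adding a clause, not rewriting the herald.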
p.24 fifth bullet. "and thereby make itself into a distributed agent". I have no idea why you identify representational structure within an agent with the agent itself, but maybe you're talking in abstract terms.