The paper discusses an interesting architecture for intelligent programs that does not rely on a representation of the external world (etc. read the paper). Which is interesting… BUT what I found more interesting is the discussion of how many AI researchers deceive themselves by using overly simple scenarios for their experiments, either virtual worlds such as box-world, or simplified versions of the real world, with matte walls, colour-coded objects, etc. (There is a related, but kind of opposite, argument made by Hofstadter, which I won't go into here.)
Brooks argues that the only way to develop intelligent systems is
[…] to build completely autonomous mobile agents that co-exist in the world with humans, and are seen by those humans as intelligent beings in their own right.
These claims were made in 1987, and in the 20 years since, the internet has brought us a completely new "real" world where many people spend hours every day. We now have a complex and, for any human purpose, practically infinite world of things to interact with.
The internet removed one layer of the difficulties of perception: the need to interpret the not very well understood, noisy, high-bandwidth channels of sound and vision. Instead, an intelligent "creature" (to borrow Brooks' terminology) can work on textual documents, which are still noisy, but at least understanding natural language seems slightly easier than understanding images.
Perhaps with the advent of the Semantic Web, the life of AI researchers has become much easier again. What previously was an unrealistic "abstraction" of the problem (ignoring the text-parsing and understanding problems, claiming that someone else would solve them, and that our work takes the already extracted semantic content as input) has now become quite a reasonable argument.
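To make the "already extracted semantic content as input" idea concrete, here is a toy sketch (entirely my own; the entities and predicates are invented for illustration): a program that skips text parsing and vision altogether and consumes Semantic-Web-style subject-predicate-object triples directly.

```python
# Toy illustration (my own sketch, not from Brooks' paper): an "intelligent"
# program that consumes semantic content someone else has already extracted,
# represented as subject-predicate-object triples.

triples = [
    ("box1", "is_a", "box"),
    ("box1", "colour", "red"),
    ("box2", "is_a", "box"),
    ("box2", "on_top_of", "box1"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the (partially specified) pattern."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What is stacked on box1? No parsing, no vision: the semantics are given.
print(query(predicate="on_top_of", obj="box1"))  # [('box2', 'on_top_of', 'box1')]
```

Of course, this only pushes the hard perception problem onto whoever produced the triples, which is exactly the "someone else will solve it" move the paragraph above describes.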
I suppose what I'm saying at the end of the day is "Thank you, Semantic Web people": you have made it possible for me to work on what I grandiosely call an "intelligent" system, without having to solve ALL the problems!