Foundations of Cognitive Science  Carstensen  01.03.1999

FOUNDATIONS OF COGNITIVE SCIENCE
Kai-Uwe Carstensen
Kai.Carstensen@CogSci.Uni-Osnabrueck.de
University of Osnabrück, Winter semester 1998/99
Information about the lecture can be found at:
http://www.cogsci.uni-osnabrueck.de/lectures/foundations/Foundationsscript.zip
THE IMPORTANCE OF COGNITIVE SCIENCE

This lecture introduces Cognitive Science, a new (inter-)discipline whose centerpiece is the investigation of cognition. Cognitive Science brings together evidence from various disciplines about how our mind and brain work, integrating the knowledge gained from different approaches to cognition. It views the mind as an information-processing system. The results of empirical observation, theoretical analysis, and computational modelling of mental activity are expected to have a tremendous impact on future information technology:

"I believe that the discovery by cognitive science and artificial intelligence of the technical challenges overcome by our mundane mental activity is one of the great revelations of science, an awakening of the imagination comparable to learning that the universe is made up of billions of galaxies or that a drop of pond water teems with microscopic life."
(Steven Pinker, How the Mind Works, p. 4)
MOTIVATION

You may ask why a new discipline is needed for the study of cognition and why establishing this discipline should be regarded as a good idea. Here are a few reasons:

Eternal questions are still unanswered

Some questions about our mind have been asked for thousands of years but still haven't been given satisfying answers: How do we think, understand, speak, learn …; what is the relation of body and mind; what is consciousness? These questions have been investigated in different, loosely related disciplines, but theoretical progress has been hindered by repeated reformulations and reinterpretations of the questions and their (partial) answers. The new joint effort made in Cognitive Science can be expected to yield answers by bringing together evidence from these different viewpoints.
Clear old problems recognized

Perhaps the most famous theoretical problem is the "frame problem" (McCarthy/Hayes 1969), well known in the field of Artificial Intelligence: the problem of specifying what exactly belongs to a certain piece of knowledge or, e.g., what is relevant for planning and performing a certain action. This is most clearly exemplified in Dennett's story of the little robot.
Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room, and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT(WAGON,ROOM) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn't realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.

Back to the drawing board. "The solution is obvious," said the designers. "Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its
plans." They called their next model the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT(WAGON,ROOM), it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the color of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon - when the bomb went off.
Back to the drawing board. "We must teach it the difference between relevant implications and irrelevant implications," said the designers. "And teach it to ignore the irrelevant ones." So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, R2D1. When they subjected R2D1 to the test that had so unequivocally selected its predecessors for extinction, they were surprised to find it sitting, Hamlet-like, outside the room containing the bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare has aptly put it.

"DO something!" its creators yelled.

"I am," it replied. "I'm busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and..." the bomb went off.

from: Dennett, D., Cognitive Wheels: The Frame Problem in AI. In Minds, Machines, and Evolution. C. Hookway, ed. Pp. 128-151. Cambridge University Press, 1984.
The robot's failure can be blamed on two problematic assumptions: First, the assumption that all problems can or have to be solved by conscious, rational thinking, that is, by the serial, step-by-step application of rules to arrive at a solution [an extreme version of this is also known as Ryle's regress: the fallacious assumption that every conscious mental act needs deliberate planning (as do its parts, its parts' parts, …)]. Second, seriality itself: although for small domains it is still a good idea to serially check the possibilities of what should or could be done in a certain situation (just look at the success of current chess computers), such serial thinking leads to a bottleneck in performance and is, in general, grossly inadequate both for biological reasons (massive parallelism in the brain) and for practical ones.
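The combinatorial character of the problem can be sketched in a few lines. If, naively, every conjunction of known facts counts as an "implication" to be checked before acting (a deliberate oversimplification; the fact names below are invented), the number of checks explodes with the size of the knowledge base:

```python
import itertools

# Hypothetical toy knowledge base (fact names invented for illustration).
facts = {"battery_on_wagon", "bomb_on_wagon", "wagon_in_room",
         "walls_are_yellow", "wheels_turn_when_pulled", "door_is_open"}

def implications(facts):
    """Naive deducer: treat every conjunction of known facts as an
    'implication' that must be considered before acting."""
    for r in range(1, len(facts) + 1):
        yield from itertools.combinations(sorted(facts), r)

# 2^n - 1 candidate conjunctions for n facts: a serial deducer that checks
# each one before acting is doomed as soon as its knowledge grows.
print(sum(1 for _ in implications(facts)))  # 63 for these 6 facts
```

With a realistically large knowledge base the count is astronomical, which is exactly the predicament of R1D1 and R2D1.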
New insights gained

The pop-out phenomenon and parallel computation: Why is it sometimes difficult to identify certain elements in our visual field (e.g., the "T" in a.), while sometimes certain elements cannot be missed (they "pop out" in front of our "mental eye", as in b.)? We now know that there is a stage of parallel processing of different aspects of the visual field that must be distinguished from a stage of serial processing. An important aspect of our visual cognition is therefore a clever division of labour between "dumb" parallel processing (highlighting "interesting" locations in the visual field) and attentive, serial processing of such locations (identifying what is there, a kind of "looking at" these locations). [Mind the metaphors used here simply for abbreviation and illustration: of course there is not a little person in our head (a homunculus) looking at our visual field (this is just what Ryle criticised)! In a sense, Cognitive Science is about replacing such inadequate metaphors by mechanisms.]

Do you see the outstanding objects in both pictures? In the first one, it is not so easy to see the "T", because there are many vertical and horizontal lines in the display, making their combination less salient.
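This division of labour can be caricatured in code. In the sketch below (display items and feature names are invented), a single "parallel" sweep builds one map per primitive feature; a unique feature shows up as a one-element map and pops out, whereas the conjunction target "T" forces serial inspection of candidate locations:

```python
from collections import defaultdict

# Invented display: each location holds a set of primitive features.
display = {0: {"vertical"}, 1: {"horizontal"}, 2: {"vertical"},
           3: {"vertical", "horizontal"},   # the "T": a conjunction of features
           4: {"horizontal"}, 5: {"vertical"},
           6: {"curved"}}                   # an "O": carries a unique feature

# "Dumb" parallel stage: build one feature map per primitive in a single sweep.
maps = defaultdict(set)
for loc, feats in display.items():
    for f in feats:
        maps[f].add(loc)

# The "O" pops out: its feature map holds exactly one location.
print(maps["curved"])  # {6}

# The "T" does not: both of its features are shared with distractors, so the
# attentive, serial stage must visit candidate locations one by one.
fixations = 0
for loc in sorted(maps["vertical"]):
    fixations += 1
    if display[loc] == {"vertical", "horizontal"}:
        break
print(fixations)  # number of serial "looks" needed to find the T
```

The point of the caricature: the cost of finding the "O" is independent of display size, while the cost of finding the "T" grows with the number of distractors sharing its features.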
New technologies invented

For a long time, the investigation of how the mind works was in the hands of philosophers and psychologists. Now we not only use computers for modelling the mind's operations (or for building artificially intelligent systems), but with new neuroimaging techniques and devices (e.g., PET scans) we are even beginning to be able to "see the mind at work".
Innovative methods found

Artificial Neural Networks and Connectionism: In the past years, new methods of computing have been developed which are more or less inspired by the information processing in our brain. Some old and difficult problems (like face or voice recognition) can be handled much better with these methods.
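As a minimal illustration of such error-driven, brain-inspired learning (a standard textbook example, not taken from the lecture), here is a perceptron that learns the logical OR function from examples:

```python
# A single artificial neuron: weighted sum, threshold, error-driven updates.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Strengthen connections that should have fired,
            # weaken those that should not have.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

Unlike a hand-written rule, the behaviour here is acquired from examples; this is the sense in which connectionist methods trade programming for learning.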
SYLLABUS

· What is Cognitive Science? Basics. History.
· The architecture of mind/cognition: CRUM (computational-representational understanding of the mind)
· Non-classical approaches to cognition
· The brain and its structure
· The relation of mind and brain
· Language
· Artificial intelligence
· Philosophy of mind
· Visual cognition / Imagery
· Attention
· Consciousness
„COGNITION“

Derived from Latin cognoscere, Greek gignoskein (perceive, (get to) know)

Introduced into modern psychology in demarcation from behaviorism
· cognitive psychology (established as a new branch of psychology in the sixties)

Cognition comprises, e.g., the functions of perception, imagination, planning, problem solving, remembering, recognizing, learning, and language generation and understanding.
WHAT IS THE AIM OF COGNITIVE SCIENCE?

-> "reverse engineering" (How did nature do it?) rather than, or before, "engineering"

Basic research questions (from Von Eckardt 1995, What is Cognitive Science?):

• For the normal, typical adult, what is the capacity to ...?
• In virtue of what does a normal, typical adult have the capacity to ...?
• How does a normal, typical adult typically ...?
• How does the capacity to ... of the normal, typical adult interact with the rest of his or her cognitive capacities?
CONTROVERSIAL AIMS

· Does Cognitive Science include the study of the cognition (or intelligence) of man-made computers?
· Does Cognitive Science include the study of the cognition (or intelligence) of non-human animals?
· Does Cognitive Science include the study of human mental phenomena other than cognition (such as emotions)?

In my opinion, of course!
THE RESEARCH FRAMEWORK OF COGNITIVE SCIENCE

· The domain of Cognitive Science consists of the (human) cognitive capacities, which have a number of properties, among them
  - Intentionality (aboutness)
  - Productivity (i.e., the ability to be used in novel ways)
· The capacities make up a system, according to which answers to the basic questions can be found
· The (human) cognitive mind/brain is a computational and representational device; hence, cognitive capacities consist of a system of computational and representational capacities
REVISED RESEARCH QUESTIONS

• For the normal, typical adult, what precisely is the information-processing function that underlies the capacity to ...?
• When a normal, typical adult has the capacity to ..., in virtue of what computational and representational resources is he or she able to compute the function ...?
• When a normal, typical adult typically exercises his or her capacity to ..., how is the function computed?
• How does the information-processing function to ... of the normal, typical adult interact with the rest of his or her information-processing functions?
ASPECTS OF THE INVOLVED DISCIPLINES

Philosophy:
· Strong influences on theories of mind, language, and knowledge
· Syllogisms (Aristotle)
· "thinking as mental calculation" view (Hobbes, Leibniz)
· Mind-body problem (Descartes)
· Ideas, categories (Plato, Kant, Wittgenstein)
· Logic (Frege, Carnap)

Neuroscience:
· Lashley (ablation technique, lesions)
· Hebb (connectivity of cell assemblies)
· McCulloch (+ Pitts) (logical abstractions of biological neurons)
· Hubel/Wiesel (single-cell recordings)

Psychology:
· Information processing replaces behaviourism as the dominant paradigm (-> "cognitive" psychology, -> George Miller)
· Thinking as symbol manipulation; computer simulation as a method for the development of theories of the functioning of the mind

Linguistics:
· Generative grammar replaces structural linguistics (Noam Chomsky) (+ psycholinguistics, computational linguistics)

Computer Science/Artificial Intelligence:
· Symbolic computation replaces mere numerical calculation as the main focus of the discipline (symbolic programming language LISP, -> McCarthy)
· Programming in Logic (PROLOG)
· Parallel Distributed Processing (PDP, -> Rumelhart/McClelland)

(Cognitive) Anthropology:
· Social/cultural variation of cognitive functions

Mathematics (as a background discipline):
· Establishment of basic formal concepts and formalisms
· Mathematical logic (Frege, Russell/Whitehead)
· Theory of computation (Turing, Church)
· Formal semantics of natural language (Montague)
THE RISE OF COGNITIVE SCIENCE

• Hixon Symposium (1948, California Institute of Technology) ("How does the nervous system control behavior?"): first counter-reaction against behaviorism (-> Skinner)
  participants: Lashley (Psych.), von Neumann (Math.), McCulloch (Neurophys.)
• Symposium on Information Theory (1956)
  participants: George Miller (Psych.), Allen Newell/Herbert Simon (Comp. Sci.), Noam Chomsky (Ling.)
• Meeting at Dartmouth College (1956)
  participants: McCarthy, Minsky, Newell, Simon
• Chomsky's review of Skinner's "Verbal Behavior" (1959)
• Establishment of the Center for Cognitive Studies at Harvard (Miller, Jerome Bruner)
• Initiative of the Sloan Foundation (since 1975)
• Establishment of the journal "Cognitive Science" (1977)
• Establishment of the Cognitive Science Society (1979)
• Establishment of the journal "Kognitionswissenschaft" (1990)
• Establishment of the "Gesellschaft für Kognitionswissenschaft" (GK) (1994)
THE CLASSICAL VIEW OF COGNITIVE SCIENCE I

"What has brought the field into existence is a common research objective: to discover the representational and computational capacities of the mind and their structural and functional representation in the brain" (from a report of the Sloan Foundation, 1978)

• Cognition is computation; the Turing machine is the most general adequate construct for the description of computability
• Cognitive processing is information processing
• Information processing presupposes internal states, according to which outputs are computed with respect to given inputs
• The basis of cognitive performance is the ability of cognitive systems to represent those aspects of the environment that are relevant for action
TURING MACHINE
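A Turing machine can be sketched as a tiny simulator: a tape, a head position, a current state, and a transition table. The machine below is an invented example (not from the lecture) that inverts a string of binary digits:

```python
# Minimal Turing machine simulator.
def run_tm(tape, rules, state="start", pos=0, blank="_"):
    tape = dict(enumerate(tape))          # tape as a sparse cell -> symbol map
    while state != "halt":
        symbol = tape.get(pos, blank)     # read the cell under the head
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                 # write, then move the head
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table: (state, read symbol) -> (write symbol, move, next state).
# This machine flips every bit and halts at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("1011", rules))  # -> 0100
```

Everything the machine "knows" is in the finite transition table; the unbounded tape is what gives the construct its generality.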
THE CLASSICAL VIEW OF COGNITIVE SCIENCE II

• Aspects of representation can be formally (i.e., mathematico-logically) described, and processing aspects can be modelled by general or specific inference mechanisms
• Cognition is described at several levels:
  - knowledge level, symbol level (representation), implementational level (physical, biological realization)
  - computational level, algorithmic level, implementational level (point of view: top-down)
• Cognitive systems are composed of various components (modules) which constitute an information processing system (IPS)
Computational level: What information-processing problem is the system solving?
Algorithmic level: What method is the system using to solve that information-processing problem?
Implementational level: What physical properties are used to implement the (functional) method that the system uses to solve this information-processing problem?
THE CLASSICAL VIEW OF COGNITIVE SCIENCE III

• Through the relations of their components, IPSs show a certain architecture, by which the behavior of the system is determined
• IPSs are symbol processing systems which are somehow physically implemented (physical symbol system hypothesis, Newell & Simon)

"A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure). Thus, a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another). At any instant of time the system will contain a collection of these symbol structures. Besides these structures, the system also contains a collection of processes that operate on expressions to produce other expressions: processes of creation, modification, reproduction and destruction.

The Physical Symbol System Hypothesis.
A physical symbol system has the necessary and sufficient means for general intelligent action. By necessary we mean that any system that exhibits intelligence will prove upon analysis to be a physical symbol system. By sufficient we mean that any physical symbol system of sufficient size can be organized further to exhibit general intelligence. By general intelligent action we wish to indicate the same scope of intelligence as we see in human action."

[from: Allen Newell and Herbert Simon, "Computer Science as Empirical Inquiry: Symbols and Search," Communications of the ACM, March 1976, pp. 113-126.]
• Cognition/mind can be described functionally (-> functionalism), i.e. independently of its material realization
• The structure and behavior of IPSs can reasonably be put into analogy with the structure and behavior of existing computers (von Neumann computer)
  - computer metaphor
  - clear distinction between machine and program
WHEN IS A MACHINE INTELLIGENT? TURING'S IDEA

Turing believed that by the end of the century, machines would be able to converse and think to the point where no one would bother debating the issue anymore. The only problem was trying to figure out how we could tell if a machine was intelligent. After all, mankind has tried to define intelligence for ages and has made little progress except to decide that whatever it is, we've got it.

Turing came up with an elegant solution. He constructed the simple proposition that if human beings are intelligent, and if a machine can imitate a human, then the machine, too, would have to be considered intelligent.

The test may seem stupendously simplistic, but given the abysmally circular discussions about the nature of consciousness, meaning and thought, Turing's idea was at least a solid point of
reference that researchers could hold onto and discuss without getting lost in debates over how<br />
many bytes could dance on the head <strong>of</strong> pin.<br />
In Turing's proposal, a human interrogator sits in a room opposite a teletype or computer<br />
terminal. Hidden from the interrogator is a computer and another human being. The interrogator<br />
interviews both and tries to determine which is human and which is a computer. If the computer<br />
can fool the interrogator, it is deemed intelligent.<br />
Turing called this the "imitation game," although it is now universally known as the Turing Test.<br />
Given the simplicity <strong>of</strong> the Turing Test, it is surprising that for decades no one ever tried to<br />
actually conduct a Turing Test. Turing himself saw it as more a theoretical proposition to discuss<br />
the nature <strong>of</strong> machine intelligence. Over the years, perhaps researchers thought it obvious that no<br />
modern machine could yet pass the test.<br />
taken from:<br />
http://www-rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/History/MachineIntelligence1.html<br />
see also<br />
http://www.cs.bilkent.edu.tr/~psaygin/ttest.html (great page on Turing test)<br />
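The statistics of the game can be sketched with a deliberately crude simulation (the responders below are invented stand-ins, and the machine is assumed to be a perfect imitator): if the answers are indistinguishable, the interrogator can do no better than chance.

```python
import random

random.seed(0)

# Invented stand-in responders; by assumption the machine imitates perfectly.
def human_reply(question):
    return "hard to say"

def machine_reply(question):
    return "hard to say"

def interrogate(rounds=1000):
    correct = 0
    for _ in range(rounds):
        human_seat = random.choice(["A", "B"])
        answer_a = human_reply("q") if human_seat == "A" else machine_reply("q")
        answer_b = machine_reply("q") if human_seat == "A" else human_reply("q")
        # Indistinguishable answers force the interrogator to guess.
        guess = human_seat if answer_a != answer_b else random.choice(["A", "B"])
        correct += (guess == human_seat)
    return correct / rounds

print(interrogate())  # hovers around 0.5: chance level, so the machine "passes"
```

The operational content of the test is exactly this: intelligence is attributed when identification accuracy drops to chance.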
VON NEUMANN MACHINE
IN SHORT: CRUM (COMPUTATIONAL-REPRESENTATIONAL UNDERSTANDING OF THE MIND) -> THAGARD

· Thinking can best be understood in terms of
  - representational structures in the mind and
  - procedures operating on those structures
· Analogy between program and mind:
  - data structures + algorithms = running programs
  - mental representations + computational procedures = thinking
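The analogy can be made concrete with a toy example (the kinship facts are invented for illustration): a representational structure plus a procedure operating on it yields an inference, i.e. "thinking" in the CRUM sense.

```python
# Representational structure: facts about the world as (parent, child) pairs.
parent = {("Alice", "Bob"), ("Bob", "Carol")}

def grandparents(facts):
    """Procedure operating on representations: chain two parent facts."""
    return {(g, c) for (g, p1) in facts for (p2, c) in facts if p1 == p2}

print(grandparents(parent))  # {('Alice', 'Carol')}
```

The conclusion ("Alice is Carol's grandparent") was nowhere stored; it is computed by a procedure from the stored representations, which is precisely the data-structures-plus-algorithms analogy.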
PROBLEMS OF THE CLASSICAL VIEW

• The notion of "representation" is controversial
• Symbol grounding is not ensured ("symbol grounding problem", -> Stevan Harnad)
• The relation of mind and brain remains unclear
• Inadequacy of the computer metaphor (massively parallel processing in the brain vs. sequential processing in the von Neumann computer)
• Top-down conception
• Missing content addressability

There is a by now classical, though at the same time highly controversial, critique of the functionalist (CRUM) view of cognition by the philosopher John R. Searle.
Searle considers the following thought experiment. Suppose that a person were given a set of purely formal rules for manipulating Chinese symbols. The person does not speak or understand written Chinese, and so he does not know what the symbols mean, though he can distinguish them by their differing shapes. The rules do not tell him what the symbols mean: they simply state that if a symbol of a certain shape comes into the room, then he should write down a symbol with a certain other shape on a piece of paper. The rules also state which groups of symbols can accompany one another, and in which order. The person sits in a room, and someone hands in a set of Chinese symbols. The person applies the rules, writes down a different set of Chinese symbols as specified by the rules on a sheet of paper, and hands the result to a person waiting outside the room. Unknown to the person in the room, the rules that he applies result in a grammatically correct conversation in Chinese. For example, if someone hands in a set of Chinese symbols that mean, "How do you feel today?", the symbols he writes down (as specified by the rules) mean, "Fine, thank you." In sum, the rules are a complete set of instructions that might be implemented on a computer designed to engage in grammatically correct conversations in Chinese. The person in the room, however, does not know this. He does not understand Chinese.

taken from: http://oit.iusb.edu/~lzynda/cogsci_lecture16.html
[see also http://www.cas.ilstu.edu/PT/chinroom.htm]

Thus, according to Searle, although artificial systems may (seem to) be intelligent (that is, show intelligent behaviour), they differ fundamentally from us, as their intelligence is not based on embodied cognition [there are many who do not subscribe to this view, however].
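Searle's rule book can be caricatured as a lookup table (the "shapes" below are placeholders, not real Chinese): the program produces the right outputs while nothing in it refers to meaning.

```python
# Purely formal rules: input shape sequence -> output shape sequence.
# The comments give the meanings, but note that the program never uses them.
rules = {
    "SHAPE-17 SHAPE-4": "SHAPE-9 SHAPE-2",   # "How do you feel today?" -> "Fine, thank you."
    "SHAPE-3": "SHAPE-11 SHAPE-5",           # some other exchange
}

def chinese_room(input_symbols):
    """The operator only matches shapes against the rule book."""
    return rules[input_symbols]

print(chinese_room("SHAPE-17 SHAPE-4"))  # SHAPE-9 SHAPE-2
```

Whether the room-plus-rules system as a whole nevertheless "understands" is exactly the point on which Searle and his critics (the "systems reply") disagree.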
ALTERNATIVES / ALTERNATIVE VIEWS

• The architecture of the brain is relevant (neuroimaging, lesions)
• Connectionism/neural nets -> subsymbolic representation and processing
• Ecological and social context are considered relevant (situated action)
• "Natural Computation" (i.e., "brain-like" computation), Dana Ballard

-> Important elements/concepts:
• Minimum description length (programs as compact code)
• Learning (developmental, behavioral, evolutionary)
• Specialized architectures
Minimum Description Length. The only answers that are practical to compute are those that retreat from the best answer in some sense. Answers can be just good, approximately correct, or correct to a certain probability. A universal metric for all these approximations is the minimum description length principle, which measures the cost of encoding regularity in data.

Learning. Biological systems can amortize the cost of algorithms over their lifetime by learning from examples. Such learning can be seen as the "on-line" detection of regularity.

Specialized Architectures. The massively parallel organization of the brain's neurons can compensate dramatically for their millisecond speeds. Particularly if the input is bounded at some fixed size, as it is with a retina or cochlea, then it can be very cost effective to design special-purpose architectures. In addition, to manage complexity the brain has evolved many hierarchical structures.

taken from: http://www.cs.rochester.edu/users/faculty/dana/comp.html
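The MDL idea can be illustrated with a toy coding scheme (the bit costs below are deliberate simplifications): a hypothesis is better if the cost of stating the model plus the cost of the data given the model is smaller.

```python
import math

# Invented example data: an obviously regular string.
data = "ababababababababab"

def cost_literal(s):
    """Hypothesis 1: no regularity; spell out every character (8 bits each)."""
    return 8 * len(s)

def cost_repeat(s, unit):
    """Hypothesis 2: 'repeat this unit n times'; pay for the unit plus a count."""
    n = len(s) // len(unit)
    assert unit * n == s            # the hypothesis must actually fit the data
    return 8 * len(unit) + math.ceil(math.log2(n + 1))

print(cost_literal(data), cost_repeat(data, "ab"))  # 144 vs 20 bits
```

Detected regularity compresses: the "repeat" hypothesis encodes the same string in a fraction of the bits, which is the sense in which MDL measures the cost of encoding regularity in data.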
GENERAL DISPUTES

· Nativism vs. empiricism
· Functionalism vs. eliminative materialism
· Associationism vs. representationalism
· Localizability of cognitive functions vs. distributedness of their representation (Descartes, Gall, Broca/Wernicke, Hubel/Wiesel)
· Modularity vs. non-modularity of cognitive (sub)systems
· Competence (ability to do sth.) vs. performance (behavior itself)
· Weak psychological AI vs. strong psychological AI

Strong AI aims at producing thinking machines, and assumes that implementing computer programs can be sufficient for producing a thinking thing.
Weak AI aims at (a) producing machines that can perform complex tasks normally performed by intelligent beings, and/or (b) studying human cognitive processes by simulating them on a computer.

· "GOFAI" (Good Old-Fashioned AI) vs. "New AI"
COGNITIVE SCIENCE AS AN "INTER"-DISCIPLINE

Disciplinary structure at universities: necessary, but not sufficient
-> Cognitive Science as an interdisciplinary venture

Problem of a purely interdisciplinary proceeding:

"It has turned out, however, that the difficulties of interdisciplinary cooperation in the field of cognitive science are much greater than assumed. In many cases, fruitful cooperation only comes about after years. The greatest obstacle to joint work are the status problems of the participating sciences, followed by a widespread unawareness of the problem consciousness, the conceptual systems, the state of knowledge, and the methodological-practical procedures of the respective other disciplines" (Roth 1996:10).
Gardner: this is a "weak" version of Cognitive Science, which "hardly deserves the label of an important new science" (Gardner 1992:407).
IS COGNITIVE SCIENCE A DISCIPLINE (IN A "STRONG" SENSE)?

If so, what is its content, what are its independent contributions?

"I hold a completely different, still controversial opinion. From my [...] point of view, the really important dividing lines in cognitive science are not the same as those of the traditional disciplines, but rather those between specific cognitive contents. Scientists should therefore be defined by the domain of cognition that is the focus of their work [...]" (Gardner 1992:407)
Properties of the discipline:<br />
Ð Variety <strong>of</strong> methods (empirical, analytic, constructive)<br />
Ð accumulative as regards content<br />
Ð Information processing as paradigm<br />
Ð topic centered + multi-leveled<br />
TO WHAT EXTENT DOES <strong>COGNITIVE</strong> <strong>SCIENCE</strong> <strong>OF</strong>FER A GENERAL<br />
EDUCATION, TO WHAT EXTENT A JOB-RELATED ONE?<br />
-> both<br />
• generality of education achieved through multidisciplinary methodological abilities<br />
• specialization through thematic/topical deepening<br />
• education is basis for new cognitive information technology<br />
ASPECTS <strong>OF</strong> COGNITION (CLASSICAL VIEW)<br />
<strong>COGNITIVE</strong> ARCHITECTURE<br />
¥ Basic assumption: IPSs are not (completely) homogeneously structured<br />
¥ IPSs consist <strong>of</strong> a set <strong>of</strong> interacting subsystems, which are functionally defined<br />
¥ Kinds <strong>of</strong> subsystems and their relations determine the „cognitive architecture“ <strong>of</strong> an IPS<br />
¥ Further assumption: cognitive architectures <strong>of</strong> normal, typical adults are roughly the same<br />
GLOBAL VIEW ON THE <strong>COGNITIVE</strong> ARCHITECTURE<br />
MEMORY<br />
¥ Long-term memory<br />
Ð declarative memory (-> Tulving)<br />
¥ semantic m. (concepts, “What is X”)<br />
¥ episodic (autobiographic) m. (events, “When did X happen”)<br />
Ð procedural memory (“How to do X”)<br />
¥ Short-term memory<br />
Ð Working memory (-> Baddeley)<br />
¥ central executive<br />
¥ specific subsystems (Visuo-spatial sketchpad, phonological loop)<br />
Ð Working memory (alternative view)<br />
¥ Set <strong>of</strong> active representations<br />
¥ Further distinction: explicit vs. implicit memory<br />
MEMORY REPRESENTATION<br />
¥ Memory<br />
Ð stores aspects <strong>of</strong> the perceived world<br />
Ð contains knowledge representation structures<br />
¥ For what?<br />
Ð Storing information makes it possible to learn, to recognize, categorize, plan, and reason<br />
Ð Storing information makes it possible to „represent the world“<br />
¥ How much is stored?<br />
Ð Long term memory: everything? (problem <strong>of</strong> forgetting)<br />
Ð Short term memory: little (7±2 “chunks”, Miller)<br />
¥ How long? -> unlimited?<br />
REPRESENTATION<br />
¥ Representation =<br />
(Symbolic data)structures + access processes<br />
¥ Different kinds (formats) <strong>of</strong> representation<br />
Ð Logic<br />
Ð Semantic networks, concept hierarchies<br />
Ð Schemata<br />
Ð Rules<br />
Ð mental images/models<br />
¥ Processes operate on representations<br />
Ð mental functions are realized<br />
Ð behavior is produced<br />
THEREFORE (REPRESENTATIONAL STANCE)<br />
Why do people have a particular kind <strong>of</strong> intelligent behavior?<br />
Explanatory pattern:<br />
¥ People have mental representations<br />
¥ People have algorithmic processes that operate on those representations<br />
¥ The processes, applied to representations, produce the behavior<br />
LOGIC<br />
¥ Motivation:<br />
Ð Many ideas about representation and computation come from the logic tradition<br />
Ð Languages <strong>of</strong> logic are suitable as universal knowledge representation languages (?)<br />
Ð Logic is a suitable/ the single suitable means for the formal representation <strong>of</strong> deduction (?)<br />
Ð Logical deduction corresponds (somehow) to natural reasoning (??)<br />
VARIOUS LOGICS<br />
¥ Propositional logic (Aussagenlogik)<br />
Ð Elements: Propositions p, q...<br />
Ð Connectors & (and), v (or), -> (if-then)<br />
Ð Negation (~p)<br />
¥ The relation <strong>of</strong> language, logic, and deduction:<br />
Ð “If it rains (p), then it is wet (q)”: p -> q<br />
Ð Patterns <strong>of</strong> inference/inference rules:<br />
¥ Modus ponens (MP) (If p -> q and p, then deduce q)<br />
¥ Modus tollens(MT) (If p -> q and ~q, then deduce ~p)<br />
Ð Thus:<br />
¥ “It rains” allows one to infer “It is wet” (via MP)<br />
¥ “It is not wet” allows one to infer “It does not rain” (via MT)<br />
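The two inference rules can be made concrete in code. A minimal sketch in Python, where the tuple encodings `("->", p, q)` for implication and `("not", p)` for negation are arbitrary choices for illustration:

```python
# Modus ponens and modus tollens over propositional atoms.
# Implication is encoded as ("->", p, q), negation as ("not", p).

def modus_ponens(implication, fact):
    """From p -> q and p, infer q."""
    op, p, q = implication
    if op == "->" and fact == p:
        return q
    return None

def modus_tollens(implication, fact):
    """From p -> q and not-q, infer not-p."""
    op, p, q = implication
    if op == "->" and fact == ("not", q):
        return ("not", p)
    return None

rule = ("->", "rains", "wet")
print(modus_ponens(rule, "rains"))           # wet
print(modus_tollens(rule, ("not", "wet")))   # ('not', 'rains')
```

When the fact does not match the rule's antecedent (or negated consequent), the functions return None: no inference is licensed.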
¥ First order predicate logic (FOPL)<br />
Ð (Prädikatenlogik erster Stufe, PL/1)<br />
Ð allows statements about (relations holding between) objects<br />
Ð Quantification:<br />
¥ “All men are mortal”: ∀x (man(x) -> mortal(x))<br />
¥ “There exists a mortal man”: ∃x (man(x) & mortal(x))<br />
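The quantified statements can be checked against a small finite domain, giving them a toy model-theoretic reading; the domain and the extensions of man/mortal below are invented:

```python
# Evaluating the two quantified statements over a finite domain.
domain = {"socrates", "plato", "fido"}
man = {"socrates", "plato"}
mortal = {"socrates", "plato", "fido"}

# "All men are mortal":  forall x. man(x) -> mortal(x)
all_men_mortal = all((x not in man) or (x in mortal) for x in domain)

# "There exists a mortal man":  exists x. man(x) & mortal(x)
exists_mortal_man = any((x in man) and (x in mortal) for x in domain)

print(all_men_mortal, exists_mortal_man)  # True True
```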
¥ Further logics (extensions of FOPL) for individual aspects of cognition/knowledge, e.g.<br />
Ð Modal logics (“it is necessary/possible that”)<br />
Ð Deontic logics (“may/must”)<br />
Ð Default-logics (“typically”) /Non-monotonic logics<br />
Ð Spatial, temporal, sortal … logics<br />
Ð Many-valued logics, Fuzzy-Logic<br />
THEOREM PROVING: TYPES <strong>OF</strong> INFERENCE<br />
¥ Deduction (application <strong>of</strong> MP)<br />
Ð Is a monotonic procedure<br />
Ð Does not correspond to „learning“<br />
¥ Induction (inferring a universal statement from single observations)<br />
Ð Could be expected to correspond to „learning“<br />
Ð However: Induction is not logically valid; a single counterexample can overturn the inferred universal statement<br />
¥ Abduction (from p->q and q: infer p)<br />
Ð “It is wet” ---> “It has rained”<br />
Ð yields explanations<br />
Ð is defeasible<br />
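Deduction and abduction can be contrasted on the rain/wet example (induction is omitted: no finite set of observations can validate the universal statement). Rules are simple (antecedent, consequent) pairs; a toy sketch:

```python
# Deduction (forward closure under MP) vs. abduction (guessing causes).
rules = [("rains", "wet")]

def deduce(facts):
    """Deduction: truth-preserving forward application of MP to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived

def abduce(observation):
    """Abduction: from p -> q and q, propose p as a defeasible explanation."""
    return {p for p, q in rules if q == observation}

print(sorted(deduce({"rains"})))  # ['rains', 'wet']
print(abduce("wet"))              # {'rains'}
```

Note the asymmetry: the deduced facts are guaranteed, while the abduced explanation may be wrong (the street may be wet for another reason).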
<strong>COGNITIVE</strong> PLAUSIBILITY<br />
3 positions:<br />
Ð Formal logic is an important part <strong>of</strong> human reasoning (“mental logic”, Rips)<br />
Ð Formal logic is only distantly related to human reasoning, but that does not matter<br />
Ð Formal logic is only distantly related to human reasoning, so Cognitive Science should pursue other<br />
approaches (“mental models”, Johnson-Laird)<br />
THEREFORE (LOGICAL STANCE)<br />
Why do people make the inferences they do?<br />
Explanation:<br />
¥ People have representations similar to sentences in logic<br />
¥ People have deductive and inductive procedures that operate on those sentences<br />
¥ The deductive and inductive procedures, applied to the sentences, produce the inferences<br />
WASON’S SELECTION TASK<br />
¥ given: cards with letters on one side and numbers on the other<br />
A B 4 7<br />
¥ Rule: If there is an A on one side of a card, then there is a 4 on the other side<br />
¥ Question to subjects: Which card(s) must be turned over in order to verify the rule?<br />
¥ Result contradicts the assumption <strong>of</strong> a mental theorem prover<br />
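The task can be analyzed by brute force: a card must be turned over exactly when one of its possible hidden sides could falsify the rule. Subjects typically choose A (and often 4), whereas this analysis licenses A and 7; the letter and number inventories below are assumptions for illustration:

```python
# Which cards can falsify "if A on one side, then 4 on the other"?
letters = ["A", "B"]
numbers = [4, 7]

def falsifies(letter, number):
    return letter == "A" and number != 4

def must_turn(visible):
    if visible in letters:                  # hidden side is a number
        return any(falsifies(visible, n) for n in numbers)
    return any(falsifies(l, visible) for l in letters)  # hidden side is a letter

cards = ["A", "B", 4, 7]
print([c for c in cards if must_turn(c)])  # ['A', 7]
```

Turning the 4 can never falsify the rule (the rule says nothing about what lies behind a 4), while an A behind the 7 would; this is exactly the MP/MT pattern above.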
RULES (PRODUCTIONS)<br />
¥ IF CONDITION THEN ACTION<br />
¥ ACTIONs are either implications/deductions (A) or actions proper (B)<br />
¥ Logic Theorist (Newell, Shaw, Simon, 1958)<br />
Ð Modelling <strong>of</strong> human theorem proving in logic<br />
¥ GPS (General Problem solver, Newell/Simon 1972)<br />
Ð Generalization to human thinking and reasoning<br />
¥ ACT (“adaptive character <strong>of</strong> thought”, J. Anderson, 1983, 1993)<br />
¥ SOAR (“State, Operator And Result”, Newell, Laird, Rosenbloom 1993)<br />
DIFFERENCE TO LOGIC<br />
¥ Simpler structure (condition-action)<br />
¥ Less strict: “p(x) -> q(x)” not necessarily interpreted as universal statement<br />
¥ Properties <strong>of</strong> the architecture<br />
Ð Working memory<br />
Ð Rule memory (procedural memory)<br />
Ð Control mechanism<br />
¥ Processing properties<br />
Ð E.g. “chunking” <strong>of</strong> rules<br />
PROBLEM SOLVING<br />
¥ Problem solving as search<br />
Ð Initial state<br />
Ð Goal state<br />
Ð Search space<br />
¥ Constraining the search space<br />
Ð by heuristic rules<br />
Ð by the constrained working memory<br />
¥ Different kinds <strong>of</strong> rule processing<br />
Ð forward<br />
Ð backward<br />
Ð bidirectional<br />
EXAMPLE: TIC-TAC-TOE<br />
¥ WIN: IF there is a row, column, or diagonal with two of my pieces and a blank space, THEN play the blank space to<br />
win<br />
¥ BLOCK: IF there is a row, column, or diagonal with two of my opponent’s pieces and a blank space, THEN play the<br />
blank space to block the opponent<br />
¥ PLAY CENTER: IF the center is blank, THEN play the center<br />
¥ PLAY EMPTY CORNER: IF there is an empty corner, THEN move to an empty corner<br />
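The four productions can be written as an ordered condition-action list in which the first matching rule fires; a minimal sketch, with the board encoded as a 9-element list and "X" assumed as our piece:

```python
# Tic-tac-toe productions: WIN > BLOCK > PLAY CENTER > PLAY EMPTY CORNER.
LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def line_gap(board, piece):
    """Blank square completing a line that already holds two `piece`s."""
    for line in LINES:
        vals = [board[i] for i in line]
        if vals.count(piece) == 2 and vals.count(" ") == 1:
            return line[vals.index(" ")]
    return None

def choose_move(board, me="X", opp="O"):
    win = line_gap(board, me)
    if win is not None:             # WIN
        return win
    block = line_gap(board, opp)
    if block is not None:           # BLOCK
        return block
    if board[4] == " ":             # PLAY CENTER
        return 4
    for corner in (0, 2, 6, 8):     # PLAY EMPTY CORNER
        if board[corner] == " ":
            return corner
    return board.index(" ")         # fallback: any blank square

board = ["X", "X", " ",
         "O", "O", " ",
         " ", " ", " "]
print(choose_move(board))  # 2  (WIN fires before BLOCK)
```

The ordering is the control mechanism: WIN and BLOCK both match this board, but the conflict is resolved in favor of the earlier rule.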
THEREFORE (RULE STANCE)<br />
Why do people have a particular kind <strong>of</strong> intelligent behavior?<br />
Explanation:<br />
¥ People have mental rules<br />
¥ People have procedures for using these rules to search a space <strong>of</strong> possible solutions, and procedures for<br />
generating new rules<br />
¥ Procedures for using and forming rules produce the behavior<br />
CONCEPTS<br />
¥ What is X?<br />
Ð Plato: Knowledge about X is innate (Nativism)<br />
Ð Locke/Hume: Knowledge about X is gained through experience (Empiricism)<br />
Ð Kant: both are correct<br />
¥ Concepts are used for categorization and inference<br />
Ð Input I is INSTANCE <strong>OF</strong> concept C<br />
Ð If I is INSTANCE <strong>OF</strong> C then it has properties P1, P2, …<br />
The purpose <strong>of</strong> categorisation is to avoid the consequences <strong>of</strong> miscategorisation. Categorisation<br />
means responding differentially to certain KINDs (or classes, or categories) <strong>of</strong> input. The<br />
purpose <strong>of</strong> the data-reduction is to get out <strong>of</strong> the […] problem <strong>of</strong> unique instances with which<br />
you can do nothing. To reduce is to select what is invariant in the inputs and will reliably allow<br />
them to be categorised correctly, and to ignore the rest.<br />
This is not just "processing economy": It is part of what it means to learn and to generalise.<br />
Stevan Harnad in an email<br />
( http://cogsci.soton.ac.uk/~harnad/Hypermail/Foundations.Cognition/0061.html )<br />
[Just imagine that situation in which you have correctly categorized that animate thing with four legs coming<br />
running towards you as a bulldog. You should therefore be able to deduce the ‘may bite me‘ property and to<br />
draw the inference that it would be a good idea to run away or to hide. Luckily, there’s more to thinking and<br />
acting than only conscious deliberation…]<br />
¥ Approaches in AI /psychology<br />
Ð Networks <strong>of</strong> concepts (Semantic Nets (Quillian 1968))<br />
➥ spreading activation (Aktivationsausbreitung)<br />
Ð Schemata (Frames (Minsky 1974), Scripts (Schank/Abelson 1977))<br />
SEMANTIC NETS<br />
¥ Relations between concepts<br />
¥ In general: links (<strong>of</strong> a certain type) between nodes (<strong>of</strong> a certain type)<br />
¥ Used for the representation <strong>of</strong> linguistic (semantic) knowledge<br />
¥ E.g., “Peter gives Mary a book” is mapped onto such a net<br />
Sentences like the following can be ruled out as semantically ill-formed:<br />
*”The tower gives an ice to the idea”<br />
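Spreading activation over such a net can be sketched as follows; the concepts, link weights, decay factor, and threshold below are all invented for illustration:

```python
# Toy semantic net: activating a node spreads a decaying fraction of its
# activation along weighted links until activation falls below a threshold.
links = {
    "canary": [("bird", 0.9), ("yellow", 0.7)],
    "bird":   [("animal", 0.9), ("wings", 0.8)],
    "yellow": [],
    "animal": [],
    "wings":  [],
}

def spread(start, decay=0.5, threshold=0.05):
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in links.get(node, []):
            a = activation[node] * weight * decay
            if a > activation.get(neighbour, 0.0) and a > threshold:
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

act = spread("canary")
print(sorted(act, key=act.get, reverse=True))
# ['canary', 'bird', 'yellow', 'animal', 'wings']
```

Concepts close to the activated node end up more active than distant ones, which is the mechanism usually invoked to explain priming effects.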
FRAMES<br />
Representation of (non-linguistic) knowledge (≠ semantic nets)<br />
Starting point:<br />
Emphasis on the structure <strong>of</strong> knowledge (as against unstructured sets <strong>of</strong> logical propositions)<br />
-> Schemata<br />
Frames as data structures for knowledge „chunks“<br />
Ð stereotypical situations (“What happens at a birthday party?”)<br />
Ð prototypical objects (representatives of a category, cp. “What does a typical chair look like?”)<br />
Idea: Frames serve to<br />
Ð categorize sensory input (by “matching”)<br />
Ð memorize and organize new information relative to old information<br />
Ð infer further information<br />
FRAMES II<br />
¥ Nets <strong>of</strong> nodes and links (similar to semantic nets)<br />
¥ Object centered, independent data structure<br />
¥ Hierarchy <strong>of</strong> frames: Inheritance<br />
Ð Taxonomies (kind hierarchies, IS-A-hierarchies, ontologies),<br />
Ð Partonomies (part hierarchies)<br />
FRAMES III<br />
¥ Entries in a frame ("Slots") may have certain values ("Fillers")<br />
also: pointers to sub-frames<br />
¥ Restrictions on fillers<br />
Ð Value is <strong>of</strong> a certain type<br />
Ð complex conditions on relations between fillers<br />
Ð “attached procedures” for computing filler values<br />
¥ If-needed<br />
¥ If-added<br />
¥ "Default"-entries as fillers<br />
¥ typical values to be assumed in the lack <strong>of</strong> better evidence<br />
¥ may be overwritten<br />
EXAMPLE: A COURSE FRAME<br />
Course<br />
A_kind_<strong>of</strong>: process (systematic series <strong>of</strong> actions)<br />
Instructor: _<br />
Room: _<br />
Meeting_time: _<br />
Requirements: exams, essays etc.<br />
Instances: Foundations_<strong>of</strong>_Cognitive_Science, Mathematics_I ...<br />
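A minimal frame sketch, assuming a single a_kind_of parent per frame: slot lookup climbs the kind hierarchy, so default fillers are inherited and may be overwritten locally. The frame names and slots follow the course example above:

```python
# Frames with slots, a_kind_of inheritance, and overridable defaults.
class Frame:
    def __init__(self, name, a_kind_of=None, **slots):
        self.name, self.parent, self.slots = name, a_kind_of, slots

    def get(self, slot):
        if slot in self.slots:          # local filler wins
            return self.slots[slot]
        if self.parent is not None:     # otherwise inherit along a_kind_of
            return self.parent.get(slot)
        return None                     # no filler anywhere in the hierarchy

process = Frame("process", kind="systematic series of actions")
course = Frame("course", a_kind_of=process,
               requirements="exams, essays etc.")        # default filler
foundations = Frame("Foundations_of_Cognitive_Science", a_kind_of=course,
                    instructor="Carstensen",
                    requirements="none")                 # default overwritten

print(foundations.get("instructor"))    # local filler
print(foundations.get("kind"))          # inherited from `process`
print(foundations.get("requirements"))  # inherited default, overwritten
```

"If-needed" and "if-added" attached procedures could be modelled the same way by storing callables in slots and invoking them in `get`.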
ASPECTS <strong>OF</strong> CONCEPT STRUCTURES<br />
Eleanor Rosch’s findings regarding the structuring of concept knowledge:<br />
¥ Vertical dimension (inheritance)<br />
Ð Different levels <strong>of</strong> representation<br />
¥ Superordinate Level (furniture, tree)<br />
¥ Basic Level (chair, table, birch tree, oak tree)<br />
¥ Subordinate Level (stool, silver birch)<br />
Ð Basic Level is a distinguished level<br />
[The distinguished basic level may vary according to context and/or expertise; it is therefore sometimes called<br />
"entry" level.]<br />
¥ Horizontal dimension<br />
Ð Members <strong>of</strong> a category are not equal<br />
Ð There are no clear (necessary and sufficient) criteria for membership in a category<br />
Ð Prototypicality<br />
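The second point, the lack of necessary and sufficient membership criteria, can be made concrete: in the toy category below (with invented feature sets), no feature is shared by all members, yet every pair of members overlaps in some feature (Wittgenstein's family resemblance):

```python
# Family resemblance: pairwise overlap without a common defining feature.
from itertools import combinations

games = {
    "poker":       {"cards", "competition", "luck"},
    "chess":       {"board", "competition", "skill"},
    "solitaire":   {"cards", "luck", "skill", "solo"},
    "tic-tac-toe": {"board", "competition", "luck"},
}

common_to_all = set.intersection(*games.values())
every_pair_overlaps = all(games[a] & games[b]
                          for a, b in combinations(games, 2))

print(common_to_all)         # set()  -- no defining feature
print(every_pair_overlaps)   # True   -- yet all pairs resemble each other
```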
The traditional recipe of the Classical Theory is as follows: take a category and a set of defining features. The<br />
ingredients are:<br />
Ð categories are arbitrary;<br />
Ð categories have defining or critical attributes;<br />
Ð the intension (or set of attributes) determines the extension of a category (which items are members).<br />
The idea is that categories are held together by defining features, which are together necessary and jointly<br />
sufficient, and that all instances of a category possess the requisite defining features. Someone who is an<br />
unmarried adult male belongs to the category *bachelor* having the defining properties *unmarried*, *adult*,<br />
and *male*. (Some people may wonder why these properties have asterisks surrounding them, i.e. why they<br />
are treated as categories. Well, properties can themselves be categories, e.g. we can include the Dutch soccer<br />
team in the category of #orangeness#. Nothing prevents properties from being categories.)<br />
This way <strong>of</strong> conceptualising categorization has its problems. The deathblow for this theory was given by the<br />
research on color terms. Before this research it was assumed that categorization and naming <strong>of</strong> colors were<br />
arbitrary; the lines between colors were arbitrary, drawn as seen fit by a culture. Individuals <strong>of</strong> a culture<br />
would simply mirror these boundaries in their own classificatory and mnemonic behavior. However, Berlin<br />
and Kay [69] showed with a series <strong>of</strong> anthropological experiments that every culture shares the same focal<br />
areas for colors, irrespective <strong>of</strong> whether or not they have names for them. That is, they all agree about what is<br />
a "good blue" or a "poor yellow", even if a culture lacks labels for them. Moreover, this universalist view on<br />
basic color categorization has been supported by developmental [Bornstein, Kessen, & Weiskopf 76] and<br />
animal studies [Sandell, Gross, & Bornstein 79]. Berlin and Kay also found a presence <strong>of</strong> fixed order in the<br />
construction <strong>of</strong> color lexicons. If a culture has only two color terms, terms will code for black and white. If a<br />
third term is added, it will be red; fourth and fifth will be yellow and green; blue and brown will be the next<br />
pair; and purple, pink, orange, and gray will be the last four names. These findings seem to be a consequence<br />
<strong>of</strong> the structure <strong>of</strong> the nervous system <strong>of</strong> the primate [ibid.; Gardner 87]. Rosch [73] continued on these<br />
experiments and found that while the naming practices turned out to be incidental, the categorization <strong>of</strong> colors<br />
seemed to reflect the organization of the nervous system, not the structure of a particular lexicon. She found this<br />
in other domains as well.<br />
Rosch discovered more properties when she investigated categorization in other domains, namely, categories<br />
seemed to be organized in a taxonomy. Categorization is performed at the basic level (table, dog, NBA-player).<br />
Above the level <strong>of</strong> basic objects is the layer <strong>of</strong> superordinate objects, being generalizations <strong>of</strong> the<br />
basic ones like furniture, animal, basketball-player. Below the basic layer is the subordinate layer containing<br />
the specializations like dinner table, Mastiff, Shawn Kemp. Possible reasons why we, and especially<br />
children, categorize at the basic level are that members of basic level categories tend to look alike, that the basic<br />
categories have many describable features (cows give milk, say "moooh", and have yellow labels in their ears<br />
(in the Netherlands)), that our physical interactions with members <strong>of</strong> such a category are the same (we use<br />
different chairs the same way), and that it allows easy communication in general situations (compare "A man<br />
was bitten by a dog" with "John was bitten by Fifi" and "A person was bitten by an animal" (Wolters,<br />
personal communication)). […]<br />
Other findings questioned the claim that categories have defining features. There are few categories, from<br />
natural ones like birds to artificial ones like tables, that seem to obey the classical rule <strong>of</strong> finite lists <strong>of</strong> critical<br />
features [see also Gardner 85]. Even when categories do have definitions, rules seem less important than<br />
prototypes (see below). In the case <strong>of</strong> the bachelor: Henry is unmarried, adult, and male. Then he is a bachelor<br />
according to the classicists. If we know that he lives together with his girlfriend, then he is certainly not a<br />
bachelor to us, but he still fits the necessary and sufficient conditions to be a bachelor. There are many other<br />
exemplars fitting the conditions: a homosexual, a monkey. Because these examples are not similar to the<br />
prototypical bachelor, they do not seem to belong in the category, even though technically they do [Barsalou<br />
92].<br />
Wittgenstein [53] has given another and probably the most persuasive argument against Classical Theory. He<br />
showed that the concept *game* does not have unambiguous defining properties. In case <strong>of</strong> doubt try to<br />
characterize poker, tic-tac-toe, Russian roulette, chess, and trivial pursuit with defining features shared by all<br />
these instances. The concept *game* is kept together by a set <strong>of</strong> relationships and similarities. The principle is<br />
that each instance shares some subset <strong>of</strong> properties an instance <strong>of</strong> a category can have. Some, all, or none <strong>of</strong><br />
the properties <strong>of</strong> this subset may be shared by another instance. This is called a family resemblance<br />
[Wittgenstein 53].<br />
taken from: http://www.soton.ac.uk/~coglab/coglab/Thesis/chptr3.html<br />
SCRIPTS<br />
¥ Representation of situation-specific knowledge (Schank/Abelson (1977): Scripts, Plans, Goals, and Understanding)<br />
¥ Example: What happens in a restaurant?<br />
Ð Entry conditions?<br />
Ð Entering etc.<br />
Ð Order meal<br />
Ð Eating<br />
Ð Paying<br />
Ð Exiting<br />
Ð Post conditions (results)?<br />
THEREFORE (CONCEPT STANCE)<br />
Why do people have a particular kind <strong>of</strong> intelligent behavior?<br />
Explanation:<br />
¥ People have a set <strong>of</strong> concepts, organized via slots that establish kind and part hierarchies and other<br />
associations.<br />
¥ People have a set <strong>of</strong> procedures for concept application, including spreading activation, matching, and<br />
inheritance.<br />
¥ The procedures applied to the concepts produce the behavior.<br />
REPRESENTATION MEMORY<br />
¥ Necessary distinction:<br />
Ð Information stored in long term memory<br />
Ð Representation instantiated in working memory<br />
¥ Which information is filed in long term memory?<br />
Ð (only) Definitions (for concepts)?<br />
Ð (only) Abstractions/Schemata/Prototypes?<br />
Ð (only) Instances/Exemplars?<br />
¥ Propositional elements as basic cognitive building blocks?<br />
¥ Pylyshyn [propositions are units <strong>of</strong> information]<br />
¥ “Language <strong>of</strong> thought” (Fodor) [propositions are units <strong>of</strong> representations]<br />
ANALOGIES AND CASES<br />
-> “Case based reasoning”<br />
Analogical Reasoning:<br />
¥ Starting point: Problem to be solved (target)<br />
¥ Remembering a similar problem (source) for which a solution is known<br />
¥ Structural comparison <strong>of</strong> source and target (putting their relevant components in correspondence with<br />
each other)<br />
¥ Adaptation <strong>of</strong> the source problem to produce a solution to the target problem<br />
¥ -> The tumor problem and the fortress story<br />
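The steps above can be sketched for the tumor/fortress analogy; the relation tuples and the simple predicate-identity matching are invented for illustration (real structure-mapping models are far more elaborate):

```python
# Mapping step: align relations with the same predicate, read off the
# object correspondences, then transfer the unmatched source relation.
source = [("divide", "army"),
          ("converge-on", "army", "fortress"),
          ("capture", "army", "fortress")]
target = [("divide", "rays"),
          ("converge-on", "rays", "tumor")]

def map_objects(source, target):
    mapping = {}
    for s in source:
        for t in target:
            if s[0] == t[0] and len(s) == len(t):   # same predicate, same arity
                mapping.update(zip(s[1:], t[1:]))
    return mapping

m = map_objects(source, target)
print(m)  # {'army': 'rays', 'fortress': 'tumor'}

# Adaptation: carry over the source relation missing from the target.
target_preds = {t[0] for t in target}
transferred = [(r[0],) + tuple(m.get(x, x) for x in r[1:])
               for r in source if r[0] not in target_preds]
print(transferred)  # [('capture', 'rays', 'tumor')]
```

The transferred relation is the analogical solution: as the divided army converges to capture the fortress, weak rays from several directions converge to destroy the tumor.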
THEREFORE (ANALOGY STANCE)<br />
Why do people have a particular kind <strong>of</strong> intelligent behavior?<br />
Explanation:<br />
¥ People have verbal and visual representations <strong>of</strong> situations that can be used as cases or analogs<br />
¥ People have processes <strong>of</strong> retrieval, mapping, and adaptation that operate on those analogs<br />
¥ The analogical processes, applied to the representations <strong>of</strong> analogs, produce the behavior<br />
MENTAL IMAGES<br />
For answering questions like the following, we obviously do not deduce an answer from a set <strong>of</strong> premises in a<br />
logic-like reasoning scheme, but we build a mental picture <strong>of</strong> the scene ("pictorial representation") in which<br />
the answer can be "read <strong>of</strong>f":<br />
How many windows are there on the front <strong>of</strong> your house or apartment building?<br />
¥ Spatial aspects <strong>of</strong> pictorial representations and temporal aspects <strong>of</strong> their processing correspond to the<br />
represented aspects in the world (mental rotation, distance estimation)<br />
¥ Visual perception and mental imagery involve (in part) the same brain structures<br />
THEREFORE (IMAGERY STANCE)<br />
Why do people have a particular kind <strong>of</strong> intelligent behavior?<br />
Explanation:<br />
¥ People have visual images <strong>of</strong> situations<br />
¥ People have processes such as scanning and rotation that operate on those images<br />
¥ The processes for constructing and manipulating images produce the intelligent behavior<br />
ASPECTS <strong>OF</strong> COGNITION (NON-CLASSICAL VIEW)<br />
CONNECTIONISM<br />
¥ Is also based on nets (networks) <strong>of</strong> nodes and links<br />
Ð Nodes: Cells, neurons, elements, units<br />
¥ However: (at least) the links do not have any meaning<br />
¥ Differentiation <strong>of</strong><br />
Ð localistic networks (nodes still have a clear meaning)<br />
Ð distributed networks (even nodes have no meaning)<br />
¥ Related terms:<br />
Ð parallel distributed processing<br />
Ð artificial neural networks<br />
Ð subsymbolic paradigm<br />
¥ Historical starting point: Networks <strong>of</strong> McCulloch-Pitts cells<br />
THE ARCHITECTURE <strong>OF</strong> CONNECTIONIST NETWORKS<br />
ELEMENTS <strong>OF</strong> NEUR(ON)AL NETWORKS<br />
¥ Cells with<br />
Ð State of activation (a_i(t))<br />
Ð Activation function (f_act)<br />
Given the activation threshold θ_j:<br />
¥ a_j(t+1) = f_act(a_j(t), net_j(t), θ_j)<br />
Ð Output function f_out<br />
¥ o_j = f_out(a_j)<br />
¥ Network of arcs (directed graph)<br />
Ð with weights w_ij between cells i and j<br />
¥ Propagation function<br />
Ð net_j(t) = Σ_i o_i(t) · w_ij<br />
¥ Learning rule<br />
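The element equations can be sketched for a single cell; a binary threshold is assumed as activation function, the identity as output function, and the concrete outputs, weights, and threshold are arbitrary illustration values:

```python
# One threshold unit: weighted net input, threshold activation, identity output.
def net_input(outputs, weights):
    """net_j = sum_i o_i * w_ij"""
    return sum(o * w for o, w in zip(outputs, weights))

def f_act(net, theta):
    """Binary threshold activation: fire iff net input reaches theta."""
    return 1.0 if net >= theta else 0.0

def f_out(a):
    """Identity output function: o_j = a_j."""
    return a

outputs = [1.0, 0.0, 1.0]    # o_i of three presynaptic cells
weights = [0.5, -0.4, 0.3]   # w_ij into cell j
net = net_input(outputs, weights)
print(net, f_out(f_act(net, theta=0.6)))  # 0.8 1.0
```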
AN EXAMPLE: XOR-NETWORK<br />
TOPOLOGY (ARCHITECTURE) <strong>OF</strong> A NETWORK<br />
¥ Is crucial for what can be represented<br />
¥ Only secondarily: for what can be learned<br />
¥ Networks without hidden units<br />
Ð cannot represent XOR<br />
¥ Perceptrons (Rosenblatt 1958)<br />
¥ Critique: Minsky/Papert 1969<br />
Ð Are only suitable for linearly separable sets of inputs (≠ XOR)<br />
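Why hidden units matter can be shown concretely: a hand-wired two-layer threshold network computes XOR, which no single-layer perceptron can, since XOR is not linearly separable. The weights and thresholds below are one of many possible hand-chosen solutions:

```python
# XOR via one hidden layer: OR and AND units feed an "OR but not AND" unit.
def unit(inputs, weights, theta):
    """Threshold unit: fire iff the weighted input sum reaches theta."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= theta else 0

def xor_net(x1, x2):
    h_or  = unit([x1, x2], [1, 1], theta=0.5)   # fires if x1 or x2
    h_and = unit([x1, x2], [1, 1], theta=1.5)   # fires if x1 and x2
    return unit([h_or, h_and], [1, -1], theta=0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))
```

The hidden layer re-represents the input so that the output unit's single linear boundary suffices, which is exactly what a network without hidden units cannot do.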
THEORETICALLY POSSIBLE KINDS <strong>OF</strong> LEARNING<br />
IN NEURAL NETWORKS<br />
¥ Development <strong>of</strong> new connections<br />
¥ Extinction <strong>of</strong> existing connections<br />
¥ Modification <strong>of</strong> strength w ij <strong>of</strong> links<br />
¥ Modification <strong>of</strong> the threshold <strong>of</strong> neurons<br />
¥ Modification <strong>of</strong> activation, propagation or output function<br />
¥ Development <strong>of</strong> new cells<br />
¥ Deletion <strong>of</strong> cells<br />
LEARNING RULES<br />
TYPES <strong>OF</strong> LEARNING<br />
¥ Supervised learning (Überwachtes Lernen)<br />
Ð Desired output (teaching input) is given<br />
¥ Reinforcement learning (Bestärkendes Lernen)<br />
Ð Feedback only whether the classification was correct<br />
¥ Unsupervised learning (Unüberwachtes Lernen)<br />
Ð Self-organisation<br />
Ð Kohonen-maps<br />
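Supervised learning in its simplest form is the perceptron rule: the weight change is proportional to (teaching input - output) times the input. A sketch on the linearly separable AND function; the learning rate and epoch count are arbitrary choices:

```python
# Perceptron learning rule: w_i += lr * (target - output) * x_i.
def train_perceptron(samples, lr=0.1, epochs=100):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b >= 0 else 0
            err = target - out                  # teaching input - output
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr * err
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b >= 0 else 0

print([predict(x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

On a linearly separable problem like AND the rule provably converges; on XOR the same single unit would cycle forever, which is the Minsky/Papert critique in miniature.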
ATTRACTIVITY <strong>OF</strong> CONNECTIONISM:<br />
NEURONAL PLAUSIBILITY<br />
¥ Similarity to biological neural networks<br />
Ð Cells as processing units<br />
Ð Parallelism of processing<br />
Ð Learning as change <strong>of</strong> weights (synaptic strengths)<br />
Ð relatively few serial steps <strong>of</strong> processing (~100/sec in the brain)<br />
¥ Distributed representation<br />
Ð in case <strong>of</strong> lesions: “graceful degradation”<br />
¥ But in addition: Abstraction from given biological realisation<br />
ATTRACTIVITY <strong>OF</strong> CONNECTIONISM:<br />
<strong>COGNITIVE</strong> PLAUSIBILITY<br />
¥ An approach to representing the microstructure of cognition<br />
¥ General principles are <strong>of</strong>fered for the solution <strong>of</strong> basic problems in theories <strong>of</strong> cognition, e.g.<br />
Ð Categorization (e.g., taste, color, smell)<br />
Ð Prototypicality effects, context phenomena<br />
Ð Face recognition<br />
Ð Content addressable memory<br />
¥ Defeating the “brittleness” <strong>of</strong> systems in AI<br />
EXAMPLE: NETTALK<br />
(SEJNOWSKI/ROSENBERG 1987)<br />
¥ A system mapping written language onto spoken language<br />
¥ Feedforward-net<br />
Ð input layer with 203 units<br />
¥ divided into 7 groups (a window of seven text symbols is coded)<br />
¥ each with 29 units (letters + blank + punctuation)<br />
Ð hidden layer with 80 units<br />
Ð output layer with 26 units<br />
¥ each unit codes a phonetic feature, syllable or intonation boundary<br />
Ð levelwise connectivity (18320 connections)<br />
¥ Supervised learning (via backpropagation)<br />
PERFORMANCE <strong>OF</strong> NETTALK<br />
¥ Text with 1024 words<br />
¥ 50 passes (ca. 250000 training pairs)<br />
¥ After that:<br />
Ð 95% correctness for elements <strong>of</strong> the same text<br />
Ð 78% correctness for elements of a new 439-word text<br />
¥ distributed representation and graceful degradation<br />
THEREFORE (CONNECTIONIST STANCE)<br />
Why do people have a particular kind <strong>of</strong> intelligent behavior?<br />
Explanation:<br />
¥ People have representations that involve simple processing units linked to each other by excitatory and<br />
inhibitory connections.<br />
¥ People have processes that spread activation between the units via their connections, as well as processes<br />
for modifying the connections<br />
¥ Applying spreading activation and learning to the units produces the behavior<br />
ADVANTAGES <strong>OF</strong> PDP<br />
¥ Speed and power (the parallelism aspect)<br />
¥ Functional persistence/fault tolerance (graceful degradation in case <strong>of</strong> damage)<br />
¥ Content addressability<br />
¥ Recovering information <strong>of</strong> a whole given only partial information<br />
¥ Categorization: Classifying new things as instances <strong>of</strong> known types<br />
¥ Recognition: Classifying input as being a certain known object (despite noise)<br />
¥ The „pop-out“ <strong>of</strong> relevant information (as opposed to exhaustive search)<br />
DIFFERENCES <strong>OF</strong> CONNECTIONIST MODELS TO THE BIOLOGICAL<br />
REALITY<br />
¥ Much lower number of neurons (10²–10⁴ vs. 10¹¹)<br />
¥ Much lower number of connections (~10⁵ connections in a net vs. 10³–10⁴ synapses per neuron)<br />
¥ Only one parameter of synaptic coupling (“weight” vs. different neurotransmitters)<br />
¥ Amplitude modulation vs. frequency modulation<br />
¥ Violation <strong>of</strong> the locality principle for synapses<br />
¥ No modelling <strong>of</strong> the synaptic structure <strong>of</strong> dendrites<br />
¥ No exact modelling <strong>of</strong> the temporal processes in neural circuits<br />
¥ Only homogeneous networks have been theoretically investigated<br />
¥ No consideration <strong>of</strong> chemical influences on neighbouring neurons<br />
¥ Biologically implausible learning rules (e.g. supervised learning)<br />
ASPECTS <strong>OF</strong> (<strong>COGNITIVE</strong>) NEURO<strong>SCIENCE</strong><br />
recommended reading: W.H. Calvin, G.A. Ojemann (1994), Conversations with Neil’s Brain.<br />
http://weber.u.washington.edu/~wcalvin/bk7/bk7.htm<br />
QUESTIONS<br />
"Cognition is what the brain does"<br />
(S. Pinker, How the Mind Works)<br />
– How is information processed in the brain?<br />
• Neurophysiology<br />
– What kinds of different structures are there in the brain, and how are they related?<br />
• Neuroanatomy<br />
– How/where are specific cognitive structures/processes realized in the brain?<br />
• Cognitive Neuroscience<br />
– In which way can behavior (or higher cognitive functions) be explained by reference to brain<br />
structures/processes?<br />
• Neuropsychology<br />
MOTIVATION: THE VIEW FROM OUTSIDE THE BRAIN<br />
(STORIES, PHENOMENA, AND DISEASES)<br />
· Phrenology (Gall, Spurzheim): the view that ~35 brain functions can be localized in specific brain<br />
regions and that there is a direct relation between the intensity of a function and the anatomical properties<br />
of the corresponding region (such that a person's characteristics can be judged from the outward appearance<br />
of the skull).<br />
· Hemispheric differences:<br />
· Language and aphasia<br />
· Broca and Wernicke (regions): the observation that specific brain regions relevant for language are left-hemispheric<br />
· Words and their context: why aphasics laughed at the president's speech (Sacks): people whose left hemisphere is damaged (i.e.,<br />
aphasics) may not understand what is said but may still be able to evaluate a person's speech with respect to its overall visual<br />
and/or acoustic properties; on the other hand, people whose right hemisphere is damaged may no longer understand jokes<br />
· What people recognize/say/do when their corpus callosum is (partially) damaged<br />
· Confabulation<br />
· The woman whose right hand had to keep her left hand from strangling her (Ramachandran)<br />
· Damage to the right hemisphere may lead to<br />
· not attending to the left side (unilateral neglect)<br />
· not recognizing one's own deficits (anosognosia)<br />
· Processing of global/local information: global information is processed preferentially in the right hemisphere<br />
· Delis/Bihrle: the observation that Down syndrome (known for disproportionately impaired language ability relative to other<br />
cognitive functions) correlates with preserved global processing<br />
· Frontal damage:<br />
· The story of Phineas Gage ( http://www.toto.com/butler/fam_tree/p_gage.htm )<br />
· Occipital damage:<br />
· Blindsight (reacting correctly to visual stimuli despite cortical blindness due to corresponding damage)<br />
· Parietal damage:<br />
· Spatial disorientation<br />
· Attentional failures (neglect, simultanagnosia): not being able to attend to the left side or to more than one object<br />
· Ideomotor apraxia (left parietal): not being able to imagine an action<br />
· Temporal damage:<br />
· Seeing but not recognizing:<br />
· Visual agnosia (The Man Who Mistook His Wife for a Hat, Sacks)<br />
· Seeing but not believing: Capgras syndrome (Ramachandran): visually recognized close relatives are thought to be impostors<br />
· Loss of declarative memory:<br />
· Korsakoff syndrome<br />
· The patient H.M.<br />
· Degenerative diseases:<br />
· Huntington's disease<br />
· Parkinson's disease<br />
· Alzheimer's disease<br />
· Thunderstorms in the brain: epilepsy<br />
· Specific phenomena (Ramachandran): processing with respect to specific cognitive functions may be quite<br />
localized<br />
· The woman who died laughing<br />
· The man who can't subtract three from seventeen<br />
· "Mirror" neurons (which fire when a monkey either performs or sees a certain gesture/manual action)<br />
· Mindblindness (Baron-Cohen)<br />
LEVELS <strong>OF</strong> DESCRIPTION<br />
BASIC NEUROANATOMY<br />
The nervous system is divided into<br />
• central and peripheral NS<br />
• Central NS (brain and spinal cord)<br />
– Endhirn (telencephalon)<br />
• Subpallium (septum, basal ganglia/striatum) and pallium (cortex)<br />
– Zwischenhirn (diencephalon)<br />
• Epithalamus, thalamus, hypothalamus<br />
+ pituitary gland (Hirnanhangdrüse)<br />
– Mittelhirn (mesencephalon)<br />
• Mittelhirndach/tectum (superior and inferior colliculi), tegmentum<br />
– Hinterhirn (metencephalon)<br />
• Cerebellum (Kleinhirn)<br />
– Nachhirn (myelencephalon)<br />
– Spinal cord (Rückenmark)<br />
SOME FUNCTIONAL ASPECTS<br />
• Spinal cord<br />
– Control of simple reflexes<br />
• Medulla<br />
– Regulation of heartbeat and breathing<br />
• Cerebellum<br />
– Coordination of fine muscle movement, balance<br />
• Pons (Brücke)<br />
– Relaying of information between cerebral cortex and cerebellum<br />
• Reticular formation (Formatio reticularis)<br />
– Involved in the control of arousal and in the sleeping and waking cycles<br />
• Locus coeruleus: destruction leads to coma<br />
• Raphe nuclei: destruction leads to insomnia<br />
SOME FUNCTIONAL ASPECTS II<br />
• Hypothalamus<br />
– Important for the autonomic nervous system and the endocrine system<br />
– Besides projecting to other structures, influences the activity of other neurons via neuromodulatory processes<br />
(hormone secretion)<br />
• Thalamus<br />
– Relay station between sensory areas and primary cortical sensory receiving areas<br />
– Connections from and to basal ganglia, cerebellum, cortex, and medial temporal lobe<br />
– Important substructure: pulvinar (-> attentional processing)<br />
• Basal ganglia (caudate nucleus, putamen, globus pallidus)<br />
– Planning of movements, control of action<br />
THE LIMBIC SYSTEM<br />
• Hippocampus<br />
• Amygdala (Mandelkern)<br />
• Cingulate gyrus<br />
• Septum<br />
• Mammillary bodies<br />
• Nucleus anterior thalami<br />
Important for emotional expression, valuation of events, and (declarative) memory<br />
LOCATING STRUCTURES IN THE BRAIN: SPATIAL NOTIONS<br />
Orientations<br />
– Rostral / anterior = towards the nose or front end<br />
– Caudal / posterior = towards the tail in animals or towards the feet in humans<br />
– Dorsal = the back side<br />
– Ventral = the belly side<br />
– Lateral = toward the outside and away from the midline<br />
– Medial = toward the midline and away from the periphery<br />
Views<br />
– Sagittal view (from the side)<br />
– Transverse view (from above/below along the rostral/caudal dimension)<br />
– Horizontal view (from above/below with respect to the ground, when the subject is standing)<br />
– Coronal view (from front/back with respect to the forebrain)<br />
IMPORTANT SPATIAL DIVISIONS<br />
Brain: two halves, connected by the corpus callosum<br />
• Division of the cerebral cortex (Großhirnrinde):<br />
– Lateral pallium (paleocortex)<br />
– Medial pallium (archicortex)<br />
– Dorsal pallium (neocortex or isocortex)<br />
• Division of the isocortex:<br />
– Frontal lobe (Stirnlappen)<br />
– Parietal lobe (Scheitellappen)<br />
– Temporal lobe (Schläfenlappen)<br />
– Occipital lobe (Hinterhauptslappen)<br />
• Cytoarchitectonic divisions of the brain: ~50 Brodmann areas (Hirnrindenfelder)<br />
ROUGH FUNCTIONAL DIVISIONS OF THE ISOCORTEX<br />
• Frontal lobe: motor areas (planning and execution of movements)<br />
– Prefrontal cortex: processing of higher cognitive functions<br />
• Parietal lobe: somatosensory areas<br />
– Attentional and spatial processing<br />
• Temporal lobe:<br />
– Auditory processing, object recognition<br />
• Occipital lobe: visual processing<br />
– Ventral pathway to the temporal lobe ("what", object processing)<br />
– Dorsal pathway to the parietal lobe ("where", spatial processing)<br />
• Association cortex<br />
ISOCORTEX<br />
• Thickness:<br />
– 2-5 mm (on average 3 mm) (grey matter)<br />
• Surface area:<br />
– 2200-2400 cm²<br />
• Highly folded surface<br />
– Gyri (crowns of the folded tissue)<br />
– Sulci (infoldings)<br />
• Consists of six layers<br />
CELLS OF THE NERVOUS SYSTEM<br />
• Glia cells<br />
• Neurons<br />
– Consist of<br />
• Dendritic tree (postsynaptic), in part with spines (Dornfortsätze)<br />
• Soma (cell body)<br />
• Axon (presynaptic)<br />
– Up to 1 m long<br />
– Contacts up to 10,000 other neurons via synapses (on average ~1,000)<br />
– Transmission of action potentials (spikes) at ~100 m/sec<br />
– Different types<br />
– Different arrangements<br />
• In layers/laminae (e.g. in the cortex)<br />
– In minicolumns (consisting of ca. 100 neurons), orthogonal to the cortex surface, and macrocolumns (consisting of ca.<br />
300 minicolumns)<br />
• In groups (-> nuclei)<br />
SYNAPSES<br />
• Electrical synapses<br />
– Electrical activity is transmitted directly (cells being very close to one another)<br />
– Via plasma bridges (gap junctions)<br />
• Chemical synapses<br />
– Presynapse, postsynapse, synaptic cleft<br />
– Transmission by neurotransmitters<br />
• Excitatory:<br />
– Acetylcholine, noradrenaline, serotonin, dopamine, glutamate<br />
• Inhibitory:<br />
– Gamma-aminobutyric acid (GABA), glycine<br />
EXPERIMENTAL TECHNIQUES FOR THE INVESTIGATION OF NEURAL STRUCTURES<br />
• Lesion studies<br />
– Destruction of selected cells and observation of degeneration in other areas as a result of the lesion<br />
• Labeling procedures<br />
– Observation of the transport of labeled chemicals, i.e.,<br />
– of anterograde transport (by autoradiographic tracing)<br />
– of retrograde transport (from synaptic terminals back to the cell body)<br />
• Single-cell recording<br />
– Observation of whether or when a cell fires (by recording its electrical activity with microelectrodes)<br />
RESULTS OF MICROINVESTIGATIONS<br />
• Discovery of functional pathways<br />
– Transport of specific information through different brain areas<br />
• Discovery of columnar organization<br />
– Vertical organization of cells in the cortex<br />
– Microcolumns: local processing of, e.g., orientation-specific or eye-specific information in the visual<br />
system<br />
• Discovery of topographic maps<br />
– Areas dedicated to the processing of specific information<br />
FUNCTIONAL NEUROIMAGING (FUNKTIONELLE BILDGEBENDE VERFAHREN):<br />
METABOLIC TECHNIQUES<br />
Emission tomography<br />
Measuring brain activity by assessing regional cerebral blood flow (rCBF)<br />
• Single Photon Emission Computed Tomography (SPECT)<br />
– Injection of HMPAO or inhalation of an air-xenon mixture<br />
• Positron Emission Tomography (PET)<br />
– Injection of labeled substances (e.g. water)<br />
• Functional Magnetic Resonance Imaging (fMRI)<br />
– Imaging of changes in the level of blood oxygenation<br />
FUNCTIONAL NEUROIMAGING:<br />
ELECTROPHYSIOLOGICAL TECHNIQUES<br />
• Electroencephalogram (EEG)<br />
– Detection of different rhythms of electrical activity (alpha, beta, theta, etc.)<br />
• Event-Related Potentials (ERPs, Ereignis-korrelierte Potentiale)<br />
– Consist of a series of positive and negative changes from a baseline<br />
– Properties of the ERP (given the performance of a cognitive event) are analyzed<br />
• Probe-Evoked Potentials<br />
– How does performing a (cognitive) task influence a probe-dependent potential?<br />
• Magnetoencephalogram (MEG)<br />
– Detection of magnetic fields induced by active neurons, by means of a superconducting quantum interference device (SQUID)<br />
– Evoked fields (EF) or magnetic event-related fields (MEF)<br />
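Why ERPs are obtained by averaging many stimulus-locked epochs can be sketched numerically: a small, fixed waveform (here a hypothetical bump 300 ms post-stimulus) is buried in much larger background EEG on every single trial, and averaging N time-locked epochs attenuates the uncorrelated noise by about 1/sqrt(N). All numerical values are illustrative.

```python
import numpy as np

# ERP averaging sketch: signal fixed across trials, noise uncorrelated.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.6, 300)                              # one 600 ms epoch
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # the "true" component

n_trials = 400
trials = erp + rng.normal(0.0, 10.0, size=(n_trials, t.size))

average = trials.mean(axis=0)        # noise std shrinks from 10 to ~10/sqrt(400) = 0.5

single_err = np.abs(trials[0] - erp).mean()   # single trial: component invisible
avg_err = np.abs(average - erp).mean()        # average: component recovered
```

The averaged epoch tracks the underlying component far more closely than any single trial, which is exactly what makes the "series of positive and negative changes from a baseline" measurable at all.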
BRAIN-PART FUNCTIONS<br />
• Hippocampus: anchoring of new information in declarative memory<br />
• Amygdala: emotional tagging of events (somatic markers)<br />
• Occipital cortex: primary visual system<br />
• Temporal cortex: object recognition<br />
• Parietal cortex: spatial orientation, visuo-spatial attention<br />
• Prefrontal cortex:<br />
– Voluntary planning and acting (~ lateral)<br />
– Personality / social behavior (~ ventromedial)<br />
MALFUNCTIONS OF THE BRAIN<br />
• Agnosia: not being able to recognize<br />
– objects/forms, colors, faces (prosopagnosia)<br />
– one's own deficits (anosognosia)<br />
– more than one object at a time (simultanagnosia)<br />
• Apraxia: not being able to act<br />
– Ideomotor apraxia: no voluntary movements (left parietal damage)<br />
• Anomia: not being able to name things<br />
• Alexia: not being able to read<br />
• Amusia: not being able to recognize melodies<br />
• Aphasia: language-related dysfunctions<br />
– Broca's (motor)<br />
– Wernicke's (semantic)<br />
• Amnesia: not being able to memorize<br />
• Neglect: not being able to attend<br />
EMOTIONS AND FEELINGS<br />
-> Damasio 1994, LeDoux 1996<br />
• Emotions<br />
– Necessarily involve the amygdala and other parts of the limbic system<br />
– Can be induced by external stimuli or internal representations (e.g., fear vs. anxiety)<br />
– Correspond to activities in different emotional systems<br />
– Distinction between primary and secondary emotions<br />
• Primary emotions: depend on limbic system circuitry (especially amygdala and anterior cingulate) (e.g. stimulus-driven fear)<br />
• Secondary emotions: based on systematic connections between categories of objects and situations, on the one hand, and primary<br />
emotions, on the other hand (also involve prefrontal and somatosensory cortices)<br />
• Emotional feelings<br />
– Correspond to being consciously aware of having an emotion<br />
• Somatic markers:<br />
– "Gut feelings" that result from emotion memory recall ("how it feels" if something happens)<br />
VISUAL PERCEPTION AND ATTENTION<br />
VISION: THE PROBLEM<br />
• How do we attain constant visual experiences/knowledge about a structured three-dimensional world<br />
consisting of objects, given that<br />
– the input is solely the intensity of incoming light (luminance) in a certain range of wavelengths, projected onto a<br />
discontinuous two-dimensional surface (the retinae of both eyes)<br />
– the perceiving subject can be in motion<br />
– the perceived world may change continuously (e.g., moving objects)<br />
– a large number of goals relevant for behaviour must be fulfilled simultaneously<br />
– the resources (memory, processing time) are limited<br />
ROOTS OF RESEARCH IN VISUAL PERCEPTION<br />
Helmholtz:<br />
depth cues<br />
Gestalt psychology (Wertheimer, Köhler, Koffka):<br />
Gestalt principles<br />
Gibson:<br />
ecological theory of vision, "direct perception", extraction of invariants<br />
Cybernetics:<br />
coupling of perception and action (control)<br />
Neisser:<br />
preattentive/perceptual vs. attentive/cognitive processing<br />
Hubel/Wiesel:<br />
"detectors", hypercolumns<br />
Marr:<br />
computational vision<br />
Aloimonos:<br />
"active vision"<br />
NEUROPHYSIOLOGICAL ASPECTS:<br />
FROM THE RETINA TO THE VISUAL CORTEX<br />
• Photoreceptors, two types, distributed differently on the retina:<br />
– Cones (Zapfen)<br />
• Predominate in and around the fovea<br />
• Require more intense light<br />
• Essential for color vision<br />
• Three types of cones ("blue" (short wavelengths), "green" (middle), "red" (long)) with overlapping response<br />
distributions<br />
– Rods (Stäbchen)<br />
• Percentage of rods increases towards the periphery of the retina<br />
• Sensitive to low levels of stimulation<br />
• Slow regeneration of photopigments<br />
• Interneurons<br />
– Receive activation from receptors; excitatory or inhibitory influence on retinal ganglion cells<br />
NEUROPHYSIOLOGICAL ASPECTS:<br />
FROM THE RETINA TO THE VISUAL CORTEX II<br />
• Retinal ganglion cells with concentric receptive fields<br />
– [receptive field: the area on the retina from which a cell receives activation]<br />
– Two types of cells<br />
• Parvocellular ("color"-sensitive, high spatial resolution, less contrast-sensitive, slow transmission of activation) [P system]<br />
• Magnocellular (large receptive fields) [M system]<br />
– Antagonistic behavior<br />
• On-center/off-surround cells (react to bright spots in the center)<br />
• Off-center/on-surround cells (react to dark spots)<br />
• Red-green and yellow-blue color antagonism<br />
– Project via the optic nerve<br />
• To the lateral geniculate nucleus (LGN) [~90% of the fibers]<br />
• To the superior colliculus<br />
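The on-center/off-surround behavior described above is commonly modelled as a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. The following sketch (sigma values and patch size are illustrative assumptions, not figures from the lecture) shows why such a cell ignores diffuse illumination but responds to a bright spot in its center.

```python
import numpy as np

# On-center/off-surround receptive field as a difference of Gaussians (sketch).
y, x = np.mgrid[-4:5, -4:5]
r2 = x**2 + y**2
center = np.exp(-r2 / (2 * 1.0**2))      # narrow excitatory center
surround = np.exp(-r2 / (2 * 3.0**2))    # broad inhibitory surround
rf = center / center.sum() - surround / surround.sum()   # balanced: sums to ~0

uniform = np.ones((9, 9))            # diffuse illumination of the whole field
spot = np.zeros((9, 9))
spot[3:6, 3:6] = 1.0                 # bright spot confined to the center

resp_uniform = (rf * uniform).sum()  # ~0: center and surround cancel
resp_spot = (rf * spot).sum()        # clearly positive: the cell responds
```

Swapping the sign of `rf` gives the off-center/on-surround variant that reacts to dark spots instead.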
NEUROPHYSIOLOGICAL ASPECTS:<br />
FROM THE RETINA TO THE VISUAL CORTEX III<br />
• The nasal branch of each eye's optic nerve projects to the contralateral brain half, crossing at the optic<br />
chiasm<br />
– Each half of the visual field is projected to the contralateral hemisphere!<br />
• Lateral geniculate nucleus (seitlicher Kniehöcker, corpus geniculatum laterale, LGN)<br />
– Six layers which alternately receive input from the two eyes, so that<br />
– corresponding receptive fields of both eyes are adjacent in the LGN [here what belongs together comes together]<br />
– Fixed order of projection:<br />
• Ipsilateral retina -> layers 2, 3, 5<br />
• Contralateral retina -> layers 1, 4, 6<br />
• Layers 1 and 2 contain the magnocellular neurons<br />
– All layers project to layer 4 [of six cortical layers] of region V1 in the primary visual cortex<br />
NEUROPHYSIOLOGICAL ASPECTS:<br />
FROM THE RETINA TO THE VISUAL CORTEX IV<br />
• Striate cortex (Streifencortex, V1, area 17)<br />
– Blobs: regions of high activity in layers 2 and 3 of V1<br />
• P cells project to blobs<br />
• M cells project to complementary interblob regions<br />
– Hypercolumnar organization:<br />
• Alternating adjacent left-eye/right-eye information in layer 4 [ocular dominance columns]<br />
• Within them, smaller columns of cells react to stimuli with a certain orientation (steps of 15°) [orientation columns]<br />
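Orientation tuning of the kind found in these columns can be sketched with a bank of oriented, Gabor-like filters in 15-degree steps: the filter whose preferred orientation matches a vertical bar stimulus responds most strongly. Filter sizes, envelope width, and carrier period are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Orientation-column sketch: one oriented filter per 15-degree step.
y, x = np.mgrid[-7:8, -7:8]

def oriented_filter(theta):
    # Rotate coordinates: the grating varies along xr, so theta = 0
    # prefers a vertically oriented stimulus.
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * 3.0**2))      # Gaussian envelope
    g = env * np.cos(2 * np.pi * xr / 6.0)           # carrier, period 6 px
    return g - g.mean()                              # zero response to uniform light

bar = (np.abs(x) <= 1).astype(float)                 # vertical bar, 3 px wide

thetas = np.deg2rad(np.arange(0, 180, 15))           # one "column" per 15 degrees
responses = [abs((oriented_filter(t) * bar).sum()) for t in thetas]
preferred = int(np.argmax(responses)) * 15           # in degrees
```

Reading out which unit in the bank responds most gives the stimulus orientation, the same population logic by which the orientation columns cover all orientations across a hypercolumn.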
FURTHER ASPECTS OF VISUAL PERCEPTION<br />
• Main functions of other visual areas<br />
– V2: binocular reaction; V3: form and depth, but not color<br />
– V4: color perception; V5: motion perception; V6: form perception<br />
• Types of cells:<br />
• Simple cells: mostly react to edges (position-specific; in V1)<br />
• Complex cells: react to contours of edges (not position-specific)<br />
• Hypercomplex cells: react to corners and angles<br />
• Object recognition is assumed to derive from collective activation in different areas ("feature maps")<br />
– Ensemble hypothesis vs. "grandmother cell hypothesis"<br />
• (the view that there are even more complex, "gnostic" cells reacting to objects of specific types)<br />
ATTENTION<br />
"Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of<br />
what seem several simultaneously possible objects or trains of thought."<br />
William James<br />
But:<br />
"In reviewing the literature on attention we were struck by several observations. One was a widespread reluctance<br />
to define attention. Another was the ease with which competing theories can accommodate the same empirical<br />
phenomena. A third observation was the consistent appeal to some intelligent force or agent in explanations of<br />
attentional phenomena. [...] As a consequence, the more we read, the more bewildered we became."<br />
(Johnston/Dark 1986:43)<br />
3 WAYS TO VIEW "ATTENTION"<br />
• As a process<br />
– managing the (limited) resources needed for cognitive processes (-> control)<br />
– regulating attentiveness to stimuli (-> alertness)<br />
– selecting relevant information (-> filter)<br />
• from different modalities (visual vs. auditory)<br />
• from different spatial locations<br />
• from different features (e.g., color vs. form)<br />
METAPHORS OF SELECTIVE ATTENTION<br />
• Attention as a<br />
– Spotlight (highlighting regions)<br />
– Zoom lens (focused vs. distributed attention)<br />
– Gradient (no sharp boundaries of the attended region)<br />
– Glue (attention is necessary for conjoining features)<br />
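The spotlight, gradient, and zoom-lens metaphors can be combined in one small sketch: attention as a Gaussian weighting over the locations of a feature map, with no sharp boundary, whose width can be varied. Grid size, stimulus positions, and sigma values are illustrative assumptions.

```python
import numpy as np

# Attention as a graded spatial weighting of a feature map (sketch).
y, x = np.mgrid[0:32, 0:32]

def attention_map(cy, cx, sigma):
    # A gradient, not a hard-edged beam: weight falls off smoothly.
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

features = np.zeros((32, 32))
features[8, 8] = 1.0                 # stimulus at the attended location
features[24, 24] = 1.0               # equally strong stimulus elsewhere

narrow = attention_map(8, 8, sigma=4.0)      # focused "spotlight"
wide = attention_map(8, 8, sigma=16.0)       # "zoom lens" opened up

weighted = narrow * features         # attended stimulus dominates processing
```

With the narrow beam the attended stimulus dominates; widening sigma redistributes weight over the whole field, which is the focused-vs-distributed trade-off of the zoom-lens metaphor.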
FEATURE INTEGRATION THEORY (TREISMAN)<br />
NETWORKS OF ATTENTION<br />
• Visuo-spatial orienting<br />
– DISENGAGE mechanism (parietal cortex)<br />
– MOVE mechanism (superior colliculus)<br />
– ENGAGE mechanism (pulvinar (thalamus))<br />
• Executive network<br />
(anterior cingulate gyrus)<br />
• Vigilance network (right parietal and right frontal lobes)<br />
APPENDIX: THE NOVELIST AND THE MANY PROGRAMMERS<br />
A MODERN FAIRY TALE (?)<br />
told by Kai-Uwe Carstensen-Grimm<br />
Once upon a time, in a not too distant future or past, there is a quite extraordinary novelist.<br />
Extraordinary because he not only writes novels, but also thinks about how this happens. He leaves no<br />
means untried to find out by which principles his work is guided. Finally he even tries to develop<br />
computer programs that are meant to mirror his natural competence as a writer, that is, programs that can<br />
write novels themselves.<br />
However, he does not remain the only one trying. To some it also seems desirable, for various<br />
reasons, to possess a writing program. One day such a program exists. It is an Ideal Program for Writing<br />
(IPS) that comes close to his abilities, in some respects even equals them. And so our novelist sets out to<br />
have a look at it. But hardly has he glanced at the documentation and program code of the IPS when a<br />
great disappointment spreads within him: not a single comment line in the whole program deals with the<br />
writer's craft; instead he finds only references to books on statistics and differential equations. And when<br />
he inquires more closely, the programmers' ignorance of the manifold aspects of writing makes him<br />
shudder just like his cold shower every morning. "Why all the analytical effort," is all they say, "our<br />
program learns everything by itself."<br />
This in particular annoys our novelist. After all, he is the expert on writing. Where exactly does the<br />
program state such essential pieces of knowledge as that one must first have an idea before one even<br />
starts to write? He himself, by contrast, has already designed programs with built-in rules like "If you<br />
have an idea, then pursue it and work out a plot". Large and complex programs that many regarded as<br />
"wondrous" (because of their strange symbols such as '"' and '$'). But all these programs never really<br />
worked. Because he had to write everything down precisely, there was always something he had failed to<br />
take into account. Above all, it never became clear to him what an 'idea', the thing one is supposed to<br />
pursue, actually is, and what should correspond to it in his wondrous programs.<br />
"Idea," he thinks one evening, his head resting on his arms folded over his laptop (on which the IPS<br />
is running), "idea, idea, idea." And already he falls asleep, tired from the long search and disappointed by<br />
what he has found. His thoughts blur, and suddenly a fairy appears to him. "You have one wish," he<br />
thinks, without hearing the words. Naturally he wishes for nothing more dearly than to know who is right<br />
with his approach to writing, he or the programmers (such a question really can only be answered in a<br />
fairy tale).<br />
When he awakes, the following lines flicker across the LCD display of his notebook:<br />
"Thou shalt tolerate the research of others"<br />
"Thou shalt respect the research of others"<br />
"Thou shalt take all relevant aspects into account when judging thine own or others' research"<br />
"Thou shalt learn Latin"<br />
"Nec vitae, nec universitatis, sed posteritatis scientiae pervestigamus"<br />
Astonished, he notices that it is perfectly clear to him what these sentences mean (and that the answer to<br />
his question can be derived from them). "Not everyone will find it so," he thinks.<br />
"I was clearly intolerant of the success of a writing system that is not only artificial but also<br />
opaque" (indeed it was this particular combination that had aroused a primal unease in him).<br />
"And I did not acknowledge and respect the IPS's writing as such. How narrow-minded," he<br />
scolds himself. After all, the program does run. And that, given the complexity of the subject, it cannot be<br />
trivial is now obvious to him. So where was the problem? It had actually slipped his mind. Oh yes, the<br />
evaluation.<br />
That everyone favors and emphasizes his own approach to the disadvantage of others is something<br />
he believes he has always known. Hence his dispute with the programmers. He remembers the<br />
controversial discussions, the arrogance on both sides and the mutual ignorance. With a smile he recalls<br />
how in recent years he was laughed at, first secretly and then ever more openly, and how at last the IPS<br />
was handed to him with a gloating grin (which he rightly interpreted as "well, do you finally want to<br />
retire?"). "We all seem to have overlooked something," it occurs to him, in view of the fact that it surely<br />
cannot have been about building the first or the best writing system. He would not even impute that to the<br />
programmers. After all, they have long been guided by biological models. Our novelist admittedly does<br />
not know his way around there; only buzzwords come to his mind: neural, genetic, virus, bug (which,<br />
however, he cannot place).<br />
"How could I overlook that, given the different perspectives on and approaches to writing, there is<br />
no comparability at all, and hence no uniform evaluation is possible?" he asks himself in astonishment.<br />
"Hey, I want to describe and they want to construct; I want to explicate, and they want to generate; I want<br />
control, they want self-organization. No wonder there is hardly any common ground." No wonder, either,<br />
he realizes, that he had so many difficulties when trying to build a system.<br />
"The bit about learning Latin is probably a joke," he then thinks, "but that we do research for<br />
posterity I had indeed forgotten." This, however, necessarily requires explicit knowledge transfer based on<br />
precise analysis. For what would happen if, in the theory part of his course 'Creative Writing: Theory and<br />
Praxis', he handed his students a few novels with the words 'Read these and you will know what writing<br />
is', or started the IPS with similarly suggestive words?<br />
So there he sits, our novelist, his head resting pensively on his right hand, thinking of all those who<br />
have always maintained that programming and writing cannot be reconciled with one another, but can at<br />
least be accepted side by side or one beneath the other. "How well we could have worked together," he<br />
muses, adding an important aspect in his mind, "each could have learned from the other, profited from the<br />
other's insights." After all (with all due recognition), he is by no means unreservedly satisfied with the<br />
IPS. Immediately, therefore, some weighty word monsters such as 'interprofessional collaboration' or<br />
'synergistic progression' come to his mind.<br />
"Convenient, to be subsumable under such cover terms," is his preliminary conclusion, "but it will<br />
be difficult to convey their meaning to all involved in such a way that the point and purpose of the whole<br />
is not sooner or later called into question again and the same cockfights break out as just now. And that is<br />
to say nothing of the uninvolved, to whom it all needs explaining."<br />
Quite suddenly, as if these reflections had removed a blockage, the idea for a new novel comes to<br />
him. Without much deliberation he sets to work. When he has worked out his ideas and the novel is<br />
written (how long would it have taken to set the corresponding parameter values of the IPS?), he wants to<br />
go back to finding out what an 'idea' actually is. For this he will perhaps write another wondrous program<br />
(without claiming that it will ever work perfectly). For now, at last, he knows in which respect such an<br />
approach is to be judged praiseworthy, and in which ridiculous. He wants to talk, for example, with<br />
people (colleagues, interested laypersons, descendants) who are interested in writing and not in<br />
programming, however difficult and important the latter may be. With these people he will then sit down<br />
in front of his laptop, click the icon of the IPS (it shows a multitude of small black boxes connected with<br />
one another in a kind of net), and describe and at the same time explain to them what is happening. If they<br />
have questions about the program itself (which will inevitably happen), he will refer them to the<br />
programmers. With some of these, after all, he has meanwhile founded an 'Interdisciplinary Institute for<br />
Writing'. And of course there is a degree program 'Computational Creative Writing', in which his students<br />
also learn to develop IPSes. Clearly, they thereby receive an optimal education, and entirely new career<br />
prospects open up to them (even if it is always difficult to present these new qualities appropriately to the<br />
outside world).<br />
And when he has long since died, people will still be grateful to him that they do not have to<br />
rediscover concepts like 'idea' and 'plot' over and over again.<br />