Levels of Description
Follow-up on questions raised last week:
1) Would Searle allow that a computer that modelled the
neurochemical processes of the brain might have real
intentionality (i.e. might really think)?
No, because only the brain has the right kind of causal powers.
See the Scientific American article by Searle.
2) According to a functional definition of a process, does
the same input always yield the same output?
Not necessarily. There might be a range of appropriate
outputs. E.g. roulette wheel, dice
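The point can be sketched in code (a toy example, not from the lecture): a die-rolling process is functionally defined by its range of appropriate outputs, not by a single fixed output.

```python
import random

def roll_die():
    """A functionally defined process: for the input 'roll the die',
    any output from 1 to 6 counts as appropriate."""
    return random.randint(1, 6)

# The same input does not always yield the same output, but every
# output falls within the functionally appropriate range.
outputs = {roll_die() for _ in range(1000)}
assert outputs <= {1, 2, 3, 4, 5, 6}
```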
Levels of Description
Processes can be explained at different levels of description.
E.g. how does a car work?
Explain to a user vs. explain to a mechanic vs.
explain to a physics student
Three levels of description:
1) Environmental level (also called intentional level): what does
the thing do? What are the externally observable inputs and
outputs (engineering: black box)? What are its capacities and
limitations?
2) Computational level (also called design level or algorithmic
level): how does the thing work? What method is used to give
certain outputs to certain inputs? What is the internal
organization (engineering: white box)? What are the inputs and
outputs of its internal states?
3) Physical level: How are the processes physically realized? How
can its workings be described as the actions of physical laws
on physical materials?
Example: multiplication by calculator vs. mind
Environmental level: 14 x 5 = 70 (calculator and mind alike)
Computational level: calculator: 14 + 14 + 14 + 14 + 14; mind: multiplication table
Physical level: calculator: silicon chip, 1s and 0s
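The two computational-level methods can be sketched as code (a toy illustration; the function names are invented for this sketch):

```python
def multiply_by_repeated_addition(a, b):
    """Calculator-style method: add a to itself b times."""
    total = 0
    for _ in range(b):
        total += a
    return total

# Mind-style method: look the answer up in a memorized table.
TABLE = {(a, b): a * b for a in range(1, 21) for b in range(1, 21)}

def multiply_by_table_lookup(a, b):
    return TABLE[(a, b)]

# Same environmental-level behavior, different computational-level methods.
assert multiply_by_repeated_addition(14, 5) == 70
assert multiply_by_table_lookup(14, 5) == 70
```

Both functions satisfy the same environmental-level description (they multiply), while differing at the computational level; realized in a silicon chip vs. in neurons, they would differ at the physical level as well.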
Each level of description supervenes on the lower level.
What a calculator does (e.g. multiplication) depends on its
program (e.g. repetitive addition), which depends on its
physical hardware.
What the brain does (i.e. think) depends on its
organization and computational processes, which
depend upon its physical make-up.
Cognitive science and the three levels
Cognitive science focuses on the computational level of description
The environmental level is the view from outside. We can see what
the brain does; what we want to know is how it does it. But
learning the brain's capacities and limitations does help us to figure
out what processes are involved.
The physical level is not very interesting. The mind could be realized
in a different physical form (multiple realizability).
Also, the physical level has too fine a granularity and is too complicated.
We can hardly meaningfully explain a brain in terms of the action of
molecules, or even the firing of neurons.
However, learning about the physical level helps us to understand the
capacities and limitations of the brain.
Levels within levels
But, there is not just one computational level.
The distinction between levels is sometimes unclear.
There are levels within levels.
The organization of the internal functional modules is a
computational description of the whole brain
The organization of smaller functional units of a functional
module is a computational description of that functional module
The organization of sub-routines within a functional unit is a
computational level description of that functional unit
Each module, functional unit or sub-routine can itself be
described at an environmental level, a computational
level or a physical level, e.g. what does this functional
unit do (in relation to other functional units of the
brain/module), how does it do it, and how is its
operation physically realized?
Attempts to describe the brain at a computational level
can zoom in or zoom out to focus on different levels of
detail.
[Figure: one attempt to describe the functional architecture of the brain.]
Homunculus: little man in the brain
Originally, a characterization of
Descartes’ idea that the mind was
situated in the brain (like a little
man) doing the thinking, receiving
inputs and sending out outputs.
Also called the Cartesian theatre
fallacy (“fallacy” because of infinite
regress when again considering the
brain of the little man and so on).
The cognitive science use of the word "homunculus" refers to
autonomous functional units of the brain.
Homuncular functionalism is the attempt to describe the
operation of the brain in terms of (computational-level
descriptions of) the interaction of smaller and smaller
units, until the most basic units can be explained in
simple, mechanistic terms (ideally, explainable at a
physical level of description).
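As a toy illustration of discharging homunculi (all names are hypothetical, not from the lecture): a "word recognizer" homunculus is explained entirely by a simpler "letter recognizer", which bottoms out in a trivially mechanical comparison that needs no further intelligence to explain.

```python
def match_letter(seen, expected):
    """Bottom-level homunculus, fully 'discharged': a simple
    mechanical comparison with no intelligence left to explain."""
    return seen == expected

def recognize_letters(seen, word):
    """Mid-level homunculus, explained entirely by match_letter."""
    return len(seen) == len(word) and all(
        match_letter(s, e) for s, e in zip(seen, word))

def recognize_word(seen, vocabulary):
    """Top-level homunculus, explained entirely by recognize_letters."""
    for word in vocabulary:
        if recognize_letters(seen, word):
            return word
    return None  # input not recognized

assert recognize_word("cat", ["dog", "cat"]) == "cat"
```

If `recognize_word` were posited without the lower-level functions, it would be an undischarged homunculus in the sense defined below.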
Undischarged homunculi: any homunculus that is posited
but whose internal workings are not broken down into
more basic levels of explanation.
Undischarged homunculi are a problem for any
description of the brain.
We want to avoid resorting to “miracles” in explaining the mind.
Modularity of Mind
A computational level description of the mind involves
describing the mind's functional architecture, i.e. how
the different functions of the mind are carried out by
different structures of the brain.
The mind can be described as organized into different
functional modules, i.e. different functionally distinct
units.
Precursor to the theory of modules: phrenology
• Popular in the 19th century
• Specific mental faculties associated with particular
locations in the brain
• Mental abilities and personality could be read
off from bumps on the skull
Modern theory of mental modules put
forward by Jerry Fodor in 1983.
Different parts of the brain are specialized in
performing different types of function.
Modules: a division of labor in the brain.
Unlike in phrenology, the modules do not have to occupy
a specific location – they can be spread out over different
areas of the brain
The clearest example of modularity in the brain:
vision, hearing, smell, touch and taste perception
are each handled by distinct specialized areas of
the brain.
Characteristics of modules
According to Fodor, a specialized faculty of the mind must
meet the following criteria in order to be a true module:
1) Domain specificity
2) Inaccessibility
3) Informational encapsulation: modules need not refer to
other psychological systems in order to operate
4) Automatic
5) Fast
6) Innate
7) Fixed neural architecture
1) Domain specificity: modules are specialized and only
operate on one kind of input, e.g. the vision module
only operates on visual input – it does not respond to
input from other sensory organs or from other parts of
the brain.
2) Inaccessibility: you cannot perceive the internal
workings of a module, e.g. you cannot look into your
mind and find out how your visual system works. You
can only perceive the output, e.g. you just see a scene in
front of you
3) Information encapsulation: A module cannot be
affected by input from other parts of the brain. Your
visual perception is not affected by what you hear or
feel or know.
Illustrated by the perseverance of optical illusions. Your
visual system does not correct an optical illusion, even
when you know that it is an illusion.
E.g. the Müller-Lyer illusion.
4) Automatic: modules perform their function
automatically and it is impossible to turn off the function,
e.g. if someone brings a hot fried chicken into class, you
cannot decide not to smell it (unfortunately)
5) Fast: modules work extremely fast, so, for example, you
appear to sense things immediately. You are unaware of
any delay between opening your eyes and seeing the
scene in front of you.
6) Innate: the capacities of a module are inborn, and not
developed through experience
7) Fixed neural architecture: there are particular neural
systems associated with particular modules
Modules within modules
Modules can very often be broken down into sub-modules,
e.g. the visual system appears to contain the following:
• Color processing module
• Form processing module
• Motion processing module
Evidence for modularity
1) Neural imaging, e.g. fMRI (functional Magnetic
Resonance Imaging) scans, show distinct structures of
the brain are active when subjects are engaged in
certain tasks. E.g.: Regions in the lateral temporal
association cortex light up when subjects engage in an
object recognition task.
[Figure: an fMRI scan]
2) Brain injuries. An injury (or lesion) in a certain region
of the brain results in characteristic problems, e.g.
injury in the right side temporal lobe results in the
inability to recognize objects (visual agnosia). A
patient with this problem can describe the visual
appearance of an object, and yet not know what it is.
In “The Man Who Mistook His Wife for a Hat” by
Oliver Sacks, a man with visual agnosia was handed a
rose. He described the object as “a convoluted
red form with a linear green attachment”, but was
unable to recognize it as a rose.
Lesion in brain structure A disrupts function X but not function Y.
Allows one to infer that functions X and Y are partially independent.
Grailog (using ‘blank node’ for unnamed instance):
Lesion in brain structure A disrupts function X but not function Y.
Lesion in brain structure B disrupts function Y but not function X.
Allows one to infer that functions X and Y are mostly independent.
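The inference pattern can be sketched as a small program (the lesion data here is hypothetical, and this is not a Grailog encoding):

```python
# Which lesions disrupt which functions (toy data):
disrupts = {
    "A": {"X"},   # lesion in structure A disrupts function X, not Y
    "B": {"Y"},   # lesion in structure B disrupts function Y, not X
}

def single_dissociation(lesion, f1, f2):
    """f1 is disrupted while f2 is spared:
    evidence that f1 and f2 are partially independent."""
    return f1 in disrupts[lesion] and f2 not in disrupts[lesion]

def double_dissociation(l1, l2, f1, f2):
    """Each function can be disrupted while the other is spared:
    stronger evidence that f1 and f2 are mostly independent."""
    return (single_dissociation(l1, f1, f2)
            and single_dissociation(l2, f2, f1))

assert single_dissociation("A", "X", "Y")
assert double_dissociation("A", "B", "X", "Y")
```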
Face recognition module
The form recognition module itself contains at least one
sub-module: the face recognition module.
Some people have a deficit (called prosopagnosia or face
blindness) in function X = face recognition, but are good in
function Y = object recognition. (People with face blindness
can recognize friends and acquaintances by other visual cues,
e.g. haircut, glasses, body shape, etc.)
Fewer people have a deficit in function Y = object recognition,
but are good in function X = face recognition.
Brain region A = the face area, and a region that could be
called B = the object area, might be involved in those deficits.
Noam Sagiv: Understanding Face Blindness
Modules vs. Central Processing
According to Fodor, the brain is divided into two types of functional unit:
modules and the central processing system.
Modules are automatic, fast-acting, unconscious. (Parts of Freud's Id?)
The central processing system is slow, voluntary and conscious.
(Analogous to Freud's Ego?)
Modules present results of internal processing to the central processing
system. The central processing system has access to the inputs of many
systems, and takes care of the logical relations between the various
contents and inputs and outputs.
The operation of the central processing system is what you experience.
You see and hear the results of the visual and auditory modules, you
compare this perception to input from your memory, your imagination,
etc. and form conclusions, make decisions, etc.
The central processing system is affected by learning.
Brain ontologies
An ontology is a shared knowledge conceptualization
(using a logical formalization).
Taxonomy: only subclass-superclass conceptualizations
Partonomy: only subpart-superpart conceptualizations
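The distinction can be illustrated with a toy miniature (the terms below are hypothetical and are not taken from the Rennes or FMA ontologies): the same upward traversal works on both relations, but subclass links mean "is a kind of" while part links mean "is a part of".

```python
taxonomy = {            # subclass -> superclass ("is a kind of")
    "pyramidal neuron": "neuron",
    "neuron": "cell",
}
partonomy = {           # subpart -> superpart ("is a part of")
    "hippocampus": "temporal lobe",
    "temporal lobe": "cerebrum",
}

def ancestors(term, relation):
    """Follow a relation upward, collecting all ancestors of a term."""
    result = []
    while term in relation:
        term = relation[term]
        result.append(term)
    return result

assert ancestors("pyramidal neuron", taxonomy) == ["neuron", "cell"]
assert ancestors("hippocampus", partonomy) == ["temporal lobe", "cerebrum"]
```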
An OWL ontology enriched with rules for brain anatomical structures
was developed at the University of Rennes, France
Ammar Mechouche, Christine Golbreich, Bernard Gibaud:
Semantic description of brain MRI images
Within the Foundational Model of Anatomy (FMA) Ontology,
a Protege partonomy for brain anatomy was developed at the
University of Washington, Seattle, USA
Jose Mejino et al.: Challenges in Reconciling Different Views of
Neuroanatomy in a Reference Ontology of Anatomy
Brain ontologies (cont.)
[Figure: exploration of the Foundational Model of Anatomy's brain partonomy]
Brain ontologies (cont.)
BrainML (http://www.brainml.org) is an evolving standard
XML metaformat to exchange neuroscience data and models.
It includes a partonomy for neural structure / anatomy:
CNS [central nervous system]
  CMA [cingulate motor area]
  dorsal prefrontal cortex
  F1/MI [primary motor area]
  F2/F7 PMd [premotor area (dorsal)]
  F3/F6 SMA/preSMA [(pre-)supplementary motor area]
  F4/F5 PMv [premotor area (ventral)]
  lateral prefrontal cortex
  FEF [frontal eye fields]
  medial prefrontal cortex
  MEF/SEF [medial eye field, supplemental eye field]
  . . .
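As a hedged illustration only (the markup below is invented for this sketch and is not actual BrainML), an XML partonomy fragment might be represented and traversed like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML in the spirit of a partonomy exchange format:
xml = """
<structure name="CNS">
  <structure name="CMA"/>
  <structure name="lateral prefrontal cortex">
    <structure name="FEF"/>
  </structure>
</structure>
"""

def subparts(elem, depth=0):
    """Recursively yield each structure with its nesting depth."""
    yield depth, elem.get("name")
    for child in elem:
        yield from subparts(child, depth + 1)

root = ET.fromstring(xml)
assert list(subparts(root)) == [
    (0, "CNS"), (1, "CMA"),
    (1, "lateral prefrontal cortex"), (2, "FEF"),
]
```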
Sterelny, Kim, The Representational Theory of Mind, Chapter 2, pp. 19-41.
More optional readings:
Pinker, Stephen, The Language Instinct, Ch. 3 “Mentalese”, pp. 55-82.
Review by Mark Alford, 2000. http://alford.fastmail.us/pinker.html
Dennett, Daniel, Brainstorms, Ch. 6 “A Cure for the Common Code”, pp. 90-