
conceptualized mental representation. (So much so, in fact, that they
were sometimes overenthusiastically characterized as dispensing with the
notion of representation altogether.) But as they have been developed
primarily within individualistic frameworks, they do not themselves con-
stitute externalist views of representation. They reconceptualize the in-
ternal form and dynamics of mental representation, but do little by way
of viewing mental representation as, in some sense, essentially embodied
or embedded.5
The second problem for encoding views is that the medium for encoding is, by definition, some type of code, and codes themselves need to be interpreted. By virtue of what is such interpretation performed? Either by virtue of some other type of code – in which case we face the same question again – or by virtue of some brute noninterpretative and so
Representation, Computation, and Cognitive Science 149

noncoding process – in which case it is difficult to see what role the initial appeal to codes (and thus interpretation) is doing. Thus, the appeal to mental encoding either leads to a regress or it was not necessary in the first place. This dilemma constitutes an objection to the use of the notion of encoding in understanding mental representation in particular, rather than representation in general, since in other cases one or the other of these two horns can be grasped.
For example, consider public codes, such as communicated natural
language and Morse code. These are interpreted by people, with inter-
pretation mediated by knowledge of the conventions governing those
codes. Since public codes are not self-interpreting, grasping the first horn
of the dilemma above to explain how public codes are interpreted is
unproblematic.
Alternatively, consider computational languages. These can be layered
on top of one another through compilation and translation, with the
most basic language, the machine language, engineered directly in the
circuitry of the machine. Although this combination of compilation and
engineering is sometimes taken as an analogue for how our language
of thought is instantiated in the head, note that while compilation does
involve encoding a higher-level language in a lower-level language, engi-
neering does not. Electronic circuitry is not a code for machine languages
but an implementation of them. Thus, in this case we can grasp the sec-
ond horn of the dilemma, but we do so by giving up the metaphor of
encoding.
One might object that this dilemma argument takes too literally what
is only an analogy between mental representations and codes. All that
cognitive scientists mean by mental representations are discrete, internal
structures that correspond to things in the world. It is these structures and their properties, not objects in the world and their properties, that cognitive processes are sensitive to. Things in the world drop out as irrelevant
for cognitive processing once the structures to which they correspond
are formed or activated, and it is for this reason that cognition is method-
ologically solipsistic.
This view can be expressed by saying that the process of mental rep-
resentation is simply that of symbol formation, and that cognition is
symbol crunching. Mental representations are natural symbols in that
they are generated by brute causal relations between cognizers and
their worlds, and so unlike conventional symbols, such as those on road
signs and in written languages, they do not require interpretation to be
symbols.
Individualism and Externalism in Cognitive Sciences
150

I suspect that this softening of the parallel between mental repre-
sentations and codes accurately captures what many cognitive scientists
(particularly psychologists) think is right about the language of thought
hypothesis. Such softening, however, raises or leaves open questions that
strict encoding views close off.
First, by placing more weight on mental representations as natural
symbols, it weakens the connection between mental representation and
computation, since computational symbols are conventional. If mental
representations are not, strictly speaking, codes but natural symbols, then
we need some account of the basis for not only how the symbols get
their original meaning, but for the syntax that governs their processing.
Fodor's own view – that the symbols in the language of thought, concepts, are innate, as is the syntax of that language – is one answer to this question, but one that very few cognitive scientists have been prepared to swallow.6
Second, this softening also highlights the question of what is special about mental representation. There are myriad causal dependencies between an organism's internal structures and states of the world. Why are those involving my perceptual apparatus and my mind symbolic, while those that concern my digestive system or the state of tension in the muscles in my leg merely causal? Philosophical projects in "psychosemantics," such as informational semantics and teleosemantics, have inevitably appealed to other forms of representation in articulating their vision of mental representation. The paradigms of these have been the internal states of measuring instruments, such as thermostats and fuel gauges, in the former case, and the functioning and products of biological organs and behaviors, such as the heart and the dance of bees, in the latter.
This deflationary naturalism about mental representation is no doubt a good thing. But whether such views can be happily married to something like encoding views of representation, or to an individualistic view of the mind, seems far from clear.7


4 the debate over Marr's theory of vision
Thus, individualism receives some support from the computational and
representational theories of mind, and so from the cognitive science
community in which those theories have been influential. But I have also
indicated that the claim that a truly explanatory cognitive science will be
individualistic has an epistemic basis more like a gesture than a proof.
One way to substantiate this second view in light of the first is to turn to
examine the continuing philosophical debate over whether David Marr's celebrated theory of early vision is individualistic.
Marr's theory occupies a special place in cognitive science as well as in the individualism-externalism debate. Marr was trained in mathematics and theoretical neuroscience at Cambridge in the 1960s, and spent most of the 1970s at both the AI Lab and the Department of Brain and Cognitive Sciences at MIT before dying, tragically, of leukemia at the age of thirty-five. His ability to draw on and contribute to neuroscience, artificial intelligence, psychology, and philosophy exemplified cognitive science at its best. Although many of the specific algorithms that Marr and his colleagues proposed have been superseded by subsequent work in the computational theory of vision, the sweep and systematicity of Marr's views, especially as laid out in Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, have given them continuing influence in the field. The importance of Marr's theory for the individualism-externalism debate can perhaps best be understood historically and in light of the cognitive science gesture made by individualists in the late 1970s.8
In the final section of "Individualism and the Mental," Burge had suggested that his thought experiments and the conclusion derived from them – that mental content and thus mental states with content were not individualistic – had implications for computational explanations of cognition. These implications were twofold. First, purely computational accounts of the mind, construed individualistically, were inadequate. Second, insofar as such explanations did appeal to a notion of mental content, they would fail to be individualistic. It is the latter of these ideas that Burge pursued in "Individualism and Psychology," in which he argued, strikingly, that Marr's theory of vision was not individualistic. This was the first attempt to explore in detail a widely respected view within cognitive science vis-à-vis the individualism issue, and it was a crucial turning point in moving beyond the cognitive science gesture toward a style of argument that really does utilize empirical practice in cognitive science itself.9
What is called "Marr's theory of vision" is an account of a range of processes in early or "low-level" vision that was developed by Marr and colleagues, such as Ellen Hildreth and Tomaso Poggio, at the Massachusetts Institute of Technology. These processes include stereopsis, the perception of motion, and shape and surface perception, and the approach is explicitly computational. Marr's Vision became the paradigm expression of the approach, particularly for philosophers, something facilitated by
Marr's comfortable blend of computational detail with broad-brushed, programmatic statements of the perspective and implications of his approach to understanding vision. For example, in his first chapter, entitled "The Philosophy and the Approach," Marr recounts the realization that represented a critical breakthrough in the methodology of the study of vision, as follows:

The message was plain. There must exist an additional level of understanding
at which the character of the information-processing tasks carried out during
perception are analyzed and understood in a way that is independent of the
particular mechanisms and structures that implement them in our heads. This
was what was missing – the analysis of the problem as an information-processing
task. . . . if the notion of different types of understanding is taken very seriously,
it allows the study of the information-processing basis of perception to be made
rigorous. It becomes possible, by separating explanations into different levels, to
make explicit statements about what is being computed and why and to con-
struct theories stating that what is being computed is optimal in some sense or is
guaranteed to function correctly.10

Over the last twenty years, work on Marr's theory of vision has continued, extending to cover the processes constituting low-level vision more extensively. By and large, the philosophical literature on individualism that appeals to Marr's theory has been content to rely almost exclusively on Marr's Vision in interpreting the theory.
As the passage from Marr quoted above suggests, critical to the computational theory that Marr advocates is a recognition of the different levels at which one can – indeed, for Marr, must – study vision. According to Marr, there are three levels of analysis to pursue in studying any information-processing device. First, there is the level of the computational theory (hereafter, the computational level), which specifies the goal of the computation, and at which the device itself is characterized in abstract, formal terms as "mapping from one kind of information to another." Second is the level of representation and algorithm (hereafter, the algorithmic level), which selects a "representation for the input and output and the algorithm to be used to transform one into the other." And third is the level of hardware implementation (hereafter, the implementational level), which tells us how the representation and algorithm are realized physically in an actual device.11
Philosophical discussions, like Marr's own discussions, have been focused on the computational and algorithmic levels for vision, what Marr himself characterizes, respectively, as the "what and why" and "how" questions about vision. As we will see, there is particular controversy over what
the computational level involves. In addition to the trichotomy of levels at which an information-processing analysis proceeds, there are two further interesting dimensions to Marr's approach to vision that have not been widely discussed in the philosophical literature. These add some complexity not only to Marr's theory, but also to the issue of how "computation" and "representation" are to be understood in it.
The first is the idea that visual computations are performed sequentially in stages of computational inference. Marr states that the overall goal of the theory of vision is "to understand how descriptions of the world may efficiently and reliably be obtained from images of it." He views the inferences from intensity changes in the retinal image to full-blown three-dimensional descriptions as proceeding via the construction of a series of preliminary representations: the raw primal sketch, the full primal sketch, and the 2 1/2-D sketch. Call this the temporal dimension to visual computation. The second idea is that visual processing is subject to modular design, and so particular aspects of the construction of 3-D images – stereopsis, depth, motion, and so on – can be investigated in principle independently. Call this the modular dimension to visual computation.12
A recognition of the temporal and modular dimensions to visual computation complicates any discussion of what "the" computational and algorithmic levels for "the" process of vision are. Minimally, in identifying each of Marr's three levels, we need first to fix at least the modular dimension to vision in order to analyze a given visual process; and to fix at least the temporal dimension in order to analyze a given visual computation. We will see how these points interact with the debate over Marr's theory shortly.
Burge's argument that Marr's theory is not individualistic is explicitly and fully presented in the following extended passage:

(1) The theory is intentional. (2) The intentional primitives of the theory and the information they carry are individuated by reference to contingently existing physical items or conditions by which they are normally caused and to which they normally apply. (3) So if these physical conditions and, possibly, attendant physical laws were regularly different, the information conveyed to the subject and the intentional content of his or her visual representations would be different. (4) It is not incoherent to conceive of relevantly different (say, optical) laws regularly causing the same non-intentionally, individualistically individuated physical regularities in the subject's eyes and nervous system. . . . (5) In such a case (by (3)) the individual's visual representations would carry different information and have different representational content, though the person's whole non-intentional physical history . . . might remain the same. (6) Assuming that some perceptual
states are identified in the theory in terms of their informational or intentional content, it follows that individualism is not true for the theory of vision.13

The second and third premises make specific claims about Marr's theory of vision, while the first premise, together with (4) and (5), indicates the affinity between this argument and Burge's original argument against individualism, cast in Twin Earth-like terms, that we discussed in Chapter 4.
Burge concentrates on defending (2)-(4), largely by an appeal to the ways in which Marr appears to rely on "the structure of the real world" in articulating both the computational and algorithmic levels for vision. Marr certainly does make a number of appeals to this structure throughout Vision. For example, he says

The purpose of these representations is to provide useful descriptions of aspects
of the real world. The structure of the real world therefore plays an important
role in determining both the nature of the representations that are used and the
nature of the processes that derive and maintain them. An important part of the
theoretical analysis is to make explicit the physical constraints and assumptions
that have been used in the design of the representations and processes . . .

And Marr does claim that the representational primitives in early vision (such as "blobs, lines, edges, groups, and so forth") "correspond to real physical changes on the viewed surface." Together these sorts of comments have been taken to support (2) and (3) in particular.14
Marr's appeals to the "structure of the real world" do not themselves, however, imply a commitment to externalism. For these remarks can be, and have been, interpreted differently. Consider two alternative interpretations available to individualists.
The first is to see Marr as giving the real world a role to play only
in constructing what he calls the computational theory. Since vision is
a process for extracting information from the world in order to allow
the organism to act effectively in that world, clearly we need to know
something of the structure of the world in our account of what vision is
for, what it is that vision does, what function vision is designed to perform.
If this is correct, then it seems possible to argue that one does not need
to look beyond the head in constructing the theory of the representation
and algorithm. As it is at this level that visual states are taxonomized qua
