encompassing but going beyond the function-theoretic characterizations
of cognitive capacities that Egan identifies, or they must allocate those
characterizations to the algorithmic level. The latter option simply exacerbates
the “gap” problem identified above. But the former option lumps
together a variety of quite different things under the heading of “the
computational level,” and subsequently fails to recognize the constraints
that computational assumptions bring in their wake. The temporal and
modular dimensions to Marr’s theory exacerbate the problem here.
There is a large issue lurking here concerning how functionalism
should be understood within computational approaches to cognition,
and correspondingly how encompassing such approaches really are.
Functionalism has usually been understood as offering a way to reconcile
our folk psychology, our manifest image of the mind, with the developing
sciences of the mind, even if that reconciliation involves revising folk psy-
chology along individualistic lines. And computationalism has been taken
to be one way of specifying what the relevant functional roles are: They
are “computational roles.” But suppose that Egan is right about Marr’s
understanding of the notion of computation as a function-theoretic notion,
and we accept the view that this understanding is shared in computational
approaches to cognition more generally. Then the corresponding
version of functionalism about the mind must be function-theoretic in
Egan’s sense: It will “prescind from the actual environment,” as must
the computational level, but also from the sort of internal causal role
that functionalists have often appealed to. Cognitive mechanisms, on
this view, take mathematically characterizable inputs to deliver mathe-
matically characterizable outputs, and qua computational devices, that
is all. Any prospects for the consilience of our “two images” must lie
elsewhere.
In arguing for the nonintentional character of Marr’s theory of vision,
Egan presents an austere picture of the heart of computational psychology,
one which accords with the individualistic orientation of computational
cognitive science as it has traditionally been developed, even
if computational psychologists have sometimes attempted to place their
theories within more encompassing contexts. We have seen that Chomsky
shares this austere view of at least Marr’s theory in suggesting that “content”
and “representation of” are not terms that the theory traffics in.
How plausible is such a view of computational psychology?
One general problem, as Larry Shapiro points out, is that a compu-
tational theory of X tells us very little about the nature of X, including
information sufficient to individuate X as (say) a visual process at all.
While Egan seems willing to accept this conclusion, placing this sort of
concern outside of computational theory proper, this response highlights
a gap between computational theory, austerely construed, and the myriad
theories (representational, functional, or ecological in nature) with
which such a theory must be integrated for it to constitute a complete,
mechanistic account of any given cognitive process. The more austere the
account of computation, the larger this gap becomes, and the less a com-
putational theory contributes to our understanding of cognition. One
might well think that Egan’s view of computational theory in psychology
errs on the side of being too austere in this respect.25
We can relate this more directly to Chomsky’s views here. If Marr’s
theory in particular, or computational theories of cognitive abilities more
generally, do not contain an account of the content of (say) visual states,
or what those states are representations of, then surely they leave out
something critical about perception or cognition more generally. Part
of the promise of computational approaches to cognition has been to
show how computation and representation go hand-in-hand. But on the
Egan-Chomsky view, it now seems that “representation” itself is a term of
art within such theories with no connection either to intentionality or to
the underlying, algorithmic level. The promise cannot be fulfilled.
Making Egan’s view less austere in a way that would bridge the gap
between computational and various other levels (both “higher” and “lower”)
would likely require one of two things: incorporating part of what
Egan thinks of as the intentional interpretation of the computational
theory within that theory itself, or offering a richer conception of the
mechanisms specified at the computational level, one that compromises
their context independence. Either way, addressing this problem removes
the bases for Egan’s argument for an individualistic view of computational
psychology.
As I shall make clear in the next two sections, I share the view that
there may be no fact of the matter about whether Marr’s theory employs
a notion of narrow or wide content. Thus, I am sympathetic to Egan’s
development of an interpretation of Marr’s framework that bypasses this
issue. But the austere conception of computation shared by Egan and
Chomsky seems to me an interpretation with too high a price.


6 Exploitative Representation and Wide Computationalism
As a beginning on an alternative way of thinking about computation and
representation, consider an interesting difference between individualistic
and externalist interpretations of Marr’s theory that concerns what it is
that Marrian computational systems have built into them. Individualists
about computation, such as Egan and Segal, hold that they incorporate
various innate assumptions about what the world is like. This is because
the process of vision involves recovering 3-D information from a 2-D
retinal image, a process that without further input would be under-
determined. The only way to solve this underdetermination problem is
to make innate assumptions about the world. The best known of these is
Ullman’s rigidity assumption, which says that “any set of elements under-
going a two-dimensional transformation has a unique interpretation as a
rigid body moving in space and hence should be interpreted as such a
body in motion.” The claim that individualists make is that assump-
tions like this are part of the computational systems that drive cognitive
processing. This is the standard way to understand Marr’s approach to
vision.26
Externalists like Shapiro have construed this matter differently. Al-
though certain assumptions must be true of the world in order for our
computational mechanisms to solve the underdetermination problem,
these are simply assumptions that are exploited by our computational
mechanisms, rather than innate in our cognitive architecture. That is,
the assumptions concern the relationships between features of the exter-
nal world, or between properties of the internal, visual array and prop-
erties of the external world, but those assumptions are not themselves
encoded in the organism. To bring out the contrast between these two
views, consider a few simple examples.27
An odometer keeps track of how many miles a car has traveled, and it
does so by recording the number of wheel rotations and being built to
display a number proportional to this number. One way it could do this
would be for the assumption that 1 rotation = x meters to be part of its
calculational machinery. If it were built this way, then it would plug the
value of its recording into an equation representing this assumption, and
compute the result. Another way of achieving the end would be to be
built simply to record x meters for every rotation, thus exploiting the fact
that 1 rotation = x meters. In the first case it encodes a representational
assumption, and uses this to compute its output. In the second, it contains
no such encoding but instead uses an existing relationship between its
structure and the structure of the world, in much the way that a polar
planimeter measures the area of closed spaces of arbitrary shapes without
doing any representation crunching. Note that however distance traveled
is measured, if an odometer finds itself in an environment in which the
relationship between rotations and distance traveled is altered (larger
wheels, say, or being driven on a treadmill), it will not function as it is
supposed to, and will misrepresent that distance.28
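To make the encode/exploit contrast concrete, here is a minimal sketch of the two odometer designs. The conversion constant, class names, and interfaces are mine, offered only as illustration, not as anything from the text:

```python
METERS_PER_ROTATION = 1.94  # illustrative stand-in for "1 rotation = x meters"


class EncodingOdometer:
    """Records rotations and encodes the conversion assumption as an
    explicit value that figures in a computation over representations."""

    def __init__(self):
        self.rotations = 0
        self.meters_per_rotation = METERS_PER_ROTATION  # encoded assumption

    def record_rotation(self):
        self.rotations += 1

    def distance_m(self):
        # Plug the recorded value into an equation representing the
        # assumption, and compute the result.
        return self.rotations * self.meters_per_rotation


class ExploitingOdometer:
    """Built so that each rotation simply advances the display by a fixed
    amount. In a physical device this would be gearing, not a stored
    symbol; the rotation-to-distance relationship is exploited by the
    device's structure rather than encoded and computed over."""

    def __init__(self):
        self.display_m = 0.0

    def record_rotation(self):
        self.display_m += METERS_PER_ROTATION  # hard-wired increment
```

Either device misreports distance once the wheel size changes, which is the point of the larger-wheels and treadmill cases above: both designs presuppose the worldly constancy, but only the first represents it.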
Consider two different (unconscious) strategies for learning how to hit
a baseball that is falling vertically to the ground. Since the ball accelerates
at 9.8 m/s² and there is a time lag between swinging and hitting, one
could either assume that the ball is falling (say, at a specific rate of accelera-
tion), and then use this assumption to calculate when one should swing;
alternatively, one could simply aim a certain distance below where one
perceives the ball at the time of swinging (say, two feet). In this latter case
one would be exploiting the relationship between acceleration, time, and
distance without having to encode that relationship in the assumptions
one brings to bear on the task.
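The same contrast can be sketched for the batter, under simplifying assumptions of mine: the ball’s speed at the start of the swing lag is treated as negligible, and the numbers other than 9.8 m/s² are illustrative.

```python
G = 9.8  # acceleration due to gravity, m/s^2


def aim_by_calculation(perceived_height_m: float, swing_lag_s: float) -> float:
    """Encoding strategy: represent the rate of acceleration and compute
    the drop during the swing lag via d = (1/2) * g * t^2 (assuming, for
    simplicity, negligible speed at the start of the lag)."""
    drop_m = 0.5 * G * swing_lag_s ** 2
    return perceived_height_m - drop_m


def aim_by_exploitation(perceived_height_m: float) -> float:
    """Exploiting strategy: always aim a fixed distance below where the
    ball is perceived (roughly two feet), leaving the relationship between
    acceleration, time, and distance unrepresented."""
    return perceived_height_m - 0.6  # ~2 feet, an illustrative offset
```

With a swing lag of about 0.35 seconds, the computed drop is 0.5 × 9.8 × 0.35² ≈ 0.6 m, which is why a fixed offset of roughly two feet can do the same work as the explicit calculation.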
Exploitative representation is an efficient form of representation when
there is a constant, reliable, causal or informational relationship between
what a device does and how the world is. Thus, rather than encode the
structure of the world and then manipulate those encodings, “smart
mechanisms” can exploit that constancy. As the odometer example sug-
gests, the encoding view also presupposes some mind-world constancy,
but this is presumed only for “input” representations to start the compu-
tational process on the right track. Exploitative representation makes a
deeper use of mind-world constancies.
The fact that there are these two different strategies for accomplishing
the same end should, minimally, make us wary of accepting the claim that
innate assumptions are the only way that a computational system could
solve the underdetermination problem. But I also want to develop the
idea that our perceptual system in particular and our cognitive systems
more generally typically exploit rather than encode information about
the world and our relationship to it, as well as say something about where
Marr himself seems to stand on this issue.
An assumption that Egan makes and that is widely shared in the philo-
sophical literatures both on individualism and computation is that at least
the algorithmic level of description within computational psychology is
individualistic. The idea here has, I think, seemed so obvious that it has
seldom been spelled out: Algorithms operate on the syntactic or formal
properties of symbols, and these are intrinsic to the organisms instantiat-
ing the symbols. We might challenge this neither by disputing how much
is built into Marr’s computational level, nor by squabbling over the line
between Marr™s computational and algorithmic levels, but, rather, by ar-
guing that computations themselves can extend beyond the head of the
organism and involve the relations between individuals and their environ-
ments. This position, wide computationalism, holds that at least some of
the computational systems that drive cognition reach beyond the limits of
the organismic boundary. Its application to Marr’s theory of vision marks
a departure from the parameters governing the standard individualist-
externalist debate over that theory. Wide computationalism constitutes
one way of thinking about the way in which cognition, even considered
computationally, is “embedded” or “situated” in its nature, and it provides
a framework within which an exploitative conception of representation
can be pursued.
The basic idea of wide computationalism is simple. Traditionally, the
sorts of computation that govern cognition have been thought to begin
and end at the skull. But why think that the skull constitutes a magic
boundary beyond which true computation ends and mere causation
begins? Given that we are creatures embedded in informationally rich
and complex environments, the computations that occur inside the head
are an important part but are not exhaustive of the corresponding com-
putational systems. This perspective opens up the possibility of exploring
computational units that include the brain as well as aspects of the brain™s
beyond-the-head environment. Wide computational systems thus involve
minds that literally extend beyond the confines of the skull into the world.
In the terms introduced earlier in Part Two, they have wide realizations
(see Figures 7.1 and 7.2).

[Figure 7.1. Standard Computationalism. An example: multiplying with
only internal symbols. The computational system ends at the skull;
computation must be entirely in the head. (1) Code the external world.
(2) Model computations between internal representations only.
(3) Explain behavior, based on outputs from step 2.]


[Figure 7.2. Wide Computationalism. An example: multiplying with
internal and external symbols. The computational system can extend
beyond the skin into the world; computation need not be entirely in the
head. (1) Identify representational or informational forms, whether in
the head or not, that constitute the relevant computational system.
(2) Model computations between these representations. (3) Behavior
itself may be part of the wide computational system.]
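To illustrate the contrast the two figures draw, here is a minimal sketch of multiplying in each style. The pencil-and-paper framing and all names are mine, a toy model of the distinction rather than anything from the text: the first routine keeps every partial product in internal state, while the second writes partial products to an external medium and reads them back.

```python
def multiply_internally(a: int, b: int) -> int:
    """Standard picture: all intermediate symbols live inside the system."""
    total = 0
    shift = 0
    while b > 0:
        digit = b % 10
        total += a * digit * (10 ** shift)  # partial products held internally
        b //= 10
        shift += 1
    return total


def multiply_widely(a: int, b: int, paper: list) -> int:
    """Wide picture: partial products are written to an external medium
    ('paper') and read back; the medium is part of the computational
    system rather than mere input to it."""
    shift = 0
    while b > 0:
        paper.append(a * (b % 10) * (10 ** shift))  # write to the world
        b //= 10
        shift += 1
    return sum(paper)  # read the external symbols back and combine
```

On the wide construal, the list passed in as `paper` plays the role of the beyond-the-head medium: two systems running identical internal code but coupled to different external media would count as different wide computational systems.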


One way to bring out the nature of the departure made by wide com-
putationalism draws on a distinction between locational and taxonomic
conceptions of psychological states. Individualists and externalists are
usually presented as disagreeing over how to taxonomize or individuate
psychological states, but both typically presume that the relevant states
are locationally individualistic: They are located within the organismic en-
velope. What individualists and externalists typically disagree about is
whether in addition to being locationally individualistic, psychological
states must also be taxonomically individualistic. This is, as we have seen,
what is usually at issue in the debate over Marr’s theory of vision, where
the focus has been on whether Marr uses a wide or a narrow notion
of content. Wide computationalism, however, rejects this assumption of
locational individualism by claiming that some of the “relevant states”
(those that constitute the relevant computational system) are located
not in the individual’s head but in her environment.
If some cognitive systems, wide computational systems, are not loca-
tionally individualistic, then they, and thus the states that constitute them,
are not taxonomically individualistic. That is, locational width entails tax-
onomic width. This is because two individuals could instantiate different
wide computational systems simply by virtue of differences in the beyond-
the-head part of their cognitive systems. Again, the framework introduced
in Chapters 5 and 6 makes this claim easy to state: Total realizations of
wide computational systems differing only in their noncore parts, or that
are radically wide, could instantiate the cognitive systems of individuals
who were molecularly identical.
The intuitive idea behind wide computationalism is easy enough to
grasp. But there are two controversial claims central to defending wide
computationalism.
