computationalism as a viable model for thinking about and studying cognitive processing. The first is that it is sometimes appropriate to offer a formal or computational characterization of an organism's environment, and to view parts of the brain of the organism, computationally characterized, together with this environment so characterized, as constituting a unified computational system. Without this being true, it is difficult to see wide computationalism as a coherent view. The second is that this resulting mind-world computational system itself, and not just the part of it inside the head, is genuinely cognitive. Without this second claim, wide computationalism would at best present a zany way of carving up the computational world, one without obvious implications for how we should think about real cognition in real heads. I take each claim in turn, with specific reference to Marr's theory of vision, and with an eye to highlighting some of the broader issues that arise in thinking about individualism and cognitive science, including how we think about mental representation.
Offering a formal or computational characterization of an organism's environment, if it is to capture important aspects of the structure and dynamics of that environment, is neither trivial nor easy. Or, to put it in terms that help to locate whatever mystery there is to this claim within perhaps more familiar territory for those working in the cognitive sciences, doing so is no more and no less trivial than doing so for psychological states themselves.
To construct a computational model of an internal, psychological process, one postulates a set of primitive states, S1, . . . , Sn, and then formulates transition rules that govern changes between these states, as well as some set of initial states to which the transition rules apply in the first instance. The computational model's adequacy is proportional to how closely its primitives, transition rules, and starting point(s) parallel aspects of the corresponding cognitive system being modeled. The crucial assumption in computational modeling is that causal transitions between physical states can be represented as inferential transitions between computational states.
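To make this picture concrete, here is a minimal sketch, in Python, of a computational model in just this sense: a set of primitive states, transition rules over them, and an initial state to which the rules apply first. The particular states and rules are invented for illustration.

    # Primitive states S1..S4 and transition rules between them.
    # All state names and rules here are hypothetical illustrations.
    STATES = {"S1", "S2", "S3", "S4"}
    TRANSITIONS = {"S1": "S2", "S2": "S3", "S3": "S4"}

    def run(initial_state, max_steps=10):
        """Apply the transition rules, starting from the initial state."""
        state = initial_state
        trajectory = [state]
        for _ in range(max_steps):
            if state not in TRANSITIONS:  # no rule applies: halt
                break
            state = TRANSITIONS[state]
            trajectory.append(state)
        return trajectory

    print(run("S1"))  # ['S1', 'S2', 'S3', 'S4']

The model's adequacy, in the terms above, would then be a matter of how closely these states and rules parallel the causal transitions in the system being modeled.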
This view of what a computational system of mental states is has been elaborated by Rob Cummins as the "Tower-Bridge" picture of
computation, and is, I think, implicit in Egan's function-theoretic conception of computational psychology. It applies not only to "classic" models of computational cognition but also to connectionist models, at least those in which there remain notions of computation and representation. In the former, the computational states tend themselves to be rich in structure and thus differentiated, and often approximate everyday concepts, with the transition rules serving both to "unpack" the complexity of these symbols (for example, "dog" -> "animal") and to capture the relations between symbols more generally. In the latter, the computational states tend to be simple, relatively undifferentiated, and they do not correspond readily to everyday concepts. Here the transition rules are connection strengths between the computational states (the "nodes"), and the dynamics of the system is governed primarily by these together with the initial layering of the nodes.29
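A correspondingly minimal sketch of the connectionist case, with invented weights, shows the sense in which the connection strengths play the role of transition rules and the layering of the nodes governs the dynamics:

    import numpy as np

    # The "transition rules" here are connection strengths (weight
    # matrices); the dynamics are fixed by these plus the layering.
    # All weights are arbitrary illustrations.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 4))  # input layer (4 nodes) -> hidden layer (3 nodes)
    W2 = rng.normal(size=(2, 3))  # hidden layer -> output layer (2 nodes)

    def step(x, W):
        """One layer-to-layer transition: weighted sum plus squashing."""
        return np.tanh(W @ x)

    x = np.array([1.0, 0.0, -1.0, 0.5])  # an initial activation pattern
    output = step(step(x, W1), W2)
    print(output)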
This basic idea is general enough that it applies not only to cognitive systems but in principle to any type of system whose structure and dynamics we wish to model. This is a desirable feature, since (i) the notion of computation that we apply to cognitive systems should not be sui generis, applying only to such systems; and (ii) a variety of noncognitive systems, from planets to ecosystems to colonies of social insects to intracellular biological systems, can be computationally modeled. What marks off cognition as special here is not the notion of computation that it employs but the idea that the cognitive system itself and its components are themselves computing, and not just being computationally modeled. This idea itself is manifest in talk of "neural computing," "single-neuron computation," and the "computational brain," as well as in the Ur-idea of the "mind as computer." If I have correctly characterized the idea of computational modeling, however, then the idea that cognition is different in kind from all (or even the vast majority of) other domains to which computational modeling is applied (that is, that, plus or minus a bit, only cognition is genuinely computational in and of itself) involves mistaking the features of the model for the features of what is modeled.
Given this notion of computation, the idea that there can be com-
putational systems that involve the nonbrain part of the world is trivial.
Less trivial is the claim that the brain plus parts of the nonbrain part of the
world together can constitute a computational system, a locationally wide
computational system, since that rests on there being a robust, structured
causal relationship between what is in the head and what is outside of it
that can be adequately captured by transition rules. As I said in Chapter 5,
processes in the brain, or more generally the organism, that have evolved
via world-mind dependencies are good candidates for forming parts of
wide computational systems, assuming that the nonorganismic contribu-
tion to this system itself has some sort of rich causal structure that admits
a formal characterization.
Perceptual processing is a good candidate place to look for such sys-
tems. A lens that transforms the light that passes through it does not itself
compute the spatial frequency of the resulting image. But there is a causal
process whose inputs (target object) and outputs (resulting image) we
can characterize formally (in terms of spatial frequency), and that in-
volves the lens as a mediating causal mechanism. This mechanism, and
those that it feeds, can exploitatively represent objects in the world. The
computation here is wide, since the computational states extend beyond
the boundary of the relevant individual, the lens. The same is true if that
lens happens to form part of somebody's eye.
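The formal characterization gestured at here can be made concrete. The following sketch, in Python with NumPy, characterizes an input in terms of its spatial frequency content; the grating image is fabricated for illustration, and nothing in the sketch requires the mediating mechanism itself to compute anything:

    import numpy as np

    # A fabricated "image": a sinusoidal grating with 8 cycles across it.
    size, cycles = 64, 8
    x = np.arange(size)
    image = np.tile(np.sin(2 * np.pi * cycles * x / size), (size, 1))

    # The 2-D Fourier transform characterizes the image formally in
    # terms of its spatial frequency content.
    spectrum = np.abs(np.fft.fft2(image))

    # The dominant horizontal frequency recovers the grating's 8 cycles.
    row = spectrum[0, : size // 2]
    print("dominant spatial frequency:", int(np.argmax(row[1:])) + 1)  # 8

The same characterization applies to the image a lens actually produces; the lens is then a mediating causal mechanism between two formally characterized states, which is all the wide computational reading requires.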
How might this apply to Marr's theory of vision? As we have seen, Marr himself construes the task of a theory of vision to show how we extract visual information from "arrays of image intensity values as detected by the photoreceptors in the retina." Thus, as we have already noted, for Marr the problem of vision begins with retinal images, not with properties of the world beyond those images, and "the true heart of visual perception is the inference from the structure of an image about the structure of the real world outside." Marr goes on to characterize a range of physical constraints that hold true of the world that make this inference possible, but he makes it clear that "the constraints are used by turning them into an assumption that may or may not be internally verifiable." For all of Marr's talk of the importance of facts about the beyond-the-head world for constructing the computational level in a theory of vision, this is representative of how Marr conceives of that relevance. It seems to me clear that, in terms that I introduced in the previous sections, Marr himself adopts an encoding view of computation and representation, rather than an exploitative view of the two. The visual system is, according to Marr, a locationally individualistic system.30
Whatever Marr's own views here, the obvious way to defend a wide computational interpretation of his theory is to resist his inference from "x is a physical constraint holding in the world" to "x is an assumption that is encoded in the brain." This is, in essence, what I have previously proposed one should do in the case of the multiple spatial channels theory of form perception. Like Marr's theory of vision, which in part builds on this work, this theory has usually been understood as postulating a locationally individualistic computational system, one that begins with
channels early in the visual pathway that are differentially sensitive to four
parameters: orientation, spatial frequency, contrast, and spatial phase.
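To give these four parameters a concrete formal face: a standard device in spatial-channels modeling is the Gabor filter, which is parameterized by exactly these properties. The sketch below, in Python with NumPy and with invented parameter values, shows one way a channel's sensitivity could be characterized formally; it is an illustration of the formalism, not of any particular published model.

    import numpy as np

    def gabor(size, orientation, frequency, phase, contrast, sigma=0.2):
        """A Gabor patch parameterized by the four channel properties:
        orientation (radians), spatial frequency (cycles per image),
        spatial phase (radians), and contrast (amplitude)."""
        coords = np.linspace(-0.5, 0.5, size)
        X, Y = np.meshgrid(coords, coords)
        # Rotate the coordinate frame to the preferred orientation.
        Xr = X * np.cos(orientation) + Y * np.sin(orientation)
        envelope = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * frequency * Xr + phase)
        return contrast * envelope * carrier

    # A channel's response to a scene can be characterized formally as
    # the correlation between the scene and the channel's filter.
    scene = gabor(64, orientation=0.0, frequency=8, phase=0.0, contrast=1.0)
    channel = gabor(64, orientation=np.pi / 4, frequency=8, phase=0.0, contrast=1.0)
    print("channel response:", float(np.sum(scene * channel)))

Nothing in this formal characterization says where its arguments must be located, and that is what leaves room for the wide reading suggested next.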
My suggestion was to take seriously the claim that any visual scene (in
the world) can be decomposed into these four properties, and so see
the computational system itself as extending into the world, with the
causal relationship between stimulus and visual channels itself modeled
by transition rules. Rather than simply having these properties encoded
in distinct visual channels in the nervous system, view the in-the-head
part of the form perception system as exploiting formal properties in the
world beyond the head. In Marr's theory, there is one respect in which this wide computational interpretation is easy to defend, and one respect in which it is difficult to defend.31
The first of these is that Marr's "assumptions," such as the spatial coincidence assumption and the "fundamental assumption of stereopsis," typically begin as physical constraints that reflect the structure of the world; in the above examples, they begin as the constraint of spatial localization and three matching constraints. Thus, the strategy is to argue that the constraints themselves, rather than their derivative encoding, play a role in defining the computational system, rather than simply filling a heuristic role in allowing us to offer a computational characterization of a locationally individualistic cognitive system.32
The corresponding respect in which a wide computational interpretation of Marr's theory is difficult to defend is that these constraints themselves do not specify what the computational primitives are. One possibility would simply be to attribute the primitives that Marr ascribes to the image to features of the perceived scenes themselves, but this would be too quick. For example, Marr considers zero-crossings to be steps in a computation that represent sharp changes in intensity in the image, and while we could take them to represent intensity changes in the stimuli in the world, zero-crossings themselves are located somewhere early in the in-the-head part of the visual system, probably close to the retina.
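For the technical referent here: zero-crossings in Marr's account are the points at which a Laplacian-of-Gaussian filtered image changes sign, and they mark sharp intensity changes. A minimal sketch, in Python with NumPy and SciPy, on a fabricated toy image:

    import numpy as np
    from scipy import ndimage

    # A toy image: dark on the left, bright on the right, sharp edge.
    image = np.zeros((32, 32))
    image[:, 16:] = 1.0

    # Laplacian of Gaussian: smooth, then take second derivatives.
    log = ndimage.gaussian_laplace(image, sigma=2.0)

    # Zero-crossings: sign changes between horizontal neighbors; these
    # are the computational steps that represent sharp intensity changes.
    crossings = (log[:, :-1] * log[:, 1:]) < 0
    print("zero-crossing columns:", np.unique(np.where(crossings)[1]))

The point in the text survives this sketch: the filtering and the sign changes occur inside the system that does the filtering, however widely we construe what they represent.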
A better strategy, I think, would be to deflate the interpretation of the retinal image and look "upstream" from it to identify richer external structures in the world, structures that satisfy the physical constraints that Marr postulates. That is, one should extend the temporal dimension to Marr's theory so that the earliest stages in basic visual processes begin
in the world, not in the head. Since the study of vision has been largely
conducted within an overarching individualistic framework, this strategy
would require recasting the theory of vision itself so that it ranges over a
process that causally extends beyond the retinal image.
Mark Rowlands has contrasted Marr's approach, which begins with the retinal image in its analysis of vision, with the ecological approach of James J. Gibson, which begins with information contained in what Gibson called the ambient optical array. Although Rowlands locates the views of Marr and Gibson on a continuum, his chief aim in drawing the contrast is to argue for a view of perception closer to Gibson's end of the continuum than to Marr's. My idea here is somewhat different: to suggest that we can augment Marr's computational view with something like Gibson's view of what the starting point for an analysis of vision is. The importance of the idea that the ambient optical array, as an information-bearing structure external to the organism, is the appropriate starting point for this analysis is, as Rowlands argues, that it enriches the information available for visual processing and so reduces the complexity that we need to attribute to the organism itself in accounts of vision. The ambient optical array is part of the locationally wide computational system in which vision takes place.33
Once we take this step, then the interaction between information-
processing structures inside organisms and information-bearing states
outside of them becomes central to a computational account of vision.
And there seems to be a second Gibsonian insight that can direct research here: The idea that vision is exploratory, and that crucial to that exploration is the movement of the organism through the ambient optical array. Rowlands thinks of this as an organism manipulating its environment to extract information from it, whereas it seems to me more fruitful to see this as an organism manipulating itself, in particular its body and its parts, for this purpose. This is one way in which vision is animate, a point to which I shall return, and a second respect in which we can extend the temporal dimension to Marr's theory. Visual inputs are not simply snapshots that are then (somehow) assembled and bound together within the organism in generating complete visual scenes. Rather, they are complete visual scenes sampled through bodily explorations of the ambient optical array over time.
If we view these aspects of Gibson's views as extensions of the temporal dimension to Marr's theory, then we might also wonder about revisiting the modular dimension to Marr's theory. As I said near the beginning of section 4, a second complexity to Marr's theory is the assumption that visual computations are highly modular, such that features of the final visual image, such as depth and motion, are computed independently. But if the information available to wide computational systems is enriched in the temporal dimension relative to that available to their
narrow counterparts, then surely there is less need to assume that vision
is as modular as Marr himself assumed.
The second task, that of showing that wide computational systems themselves, and not just their in-the-head components, are cognitive systems, is perhaps better undertaken once we have a range of examples of wide computational systems before us. Before turning to that, now that we have explored the debate over Marr's theory in some detail, I want to return to the idea of narrow content.


7 narrow content and marr's theory
Consider the very first move in Segal's argument for the conclusion that Marr's theory of vision is individualistic: The innocuous-looking claim that there are two general interpretations available when one seeks to ascribe intentional contents to the visual states of two individuals, one "restrictive" (Burge's) and one "liberal" (Segal's). In introducing the distinction between narrow content and wide content in Chapter 4, I indicated that something like these two general alternatives were implicit in the basic Twin Earth cases with which we, and the debate over individualism, began. There I also said that the resulting idea, that twins must share some intentional state about watery substances (or about arthritislike diseases, in Burge's standard case), is the basis for attempts to articulate a notion of narrow content, that is, intentional content that does supervene on the intrinsic, physical properties of the individual.
The presupposition of a liberal interpretation for Marr's theory, and a corresponding view of the original Twin Earth cases in general, are themselves questionable. Note first that the representations that we might, in order to make their disjunctive content perspicuous, label "crackdow" or "water or twater," do represent their reliable, environmental causes: "Crackdow" is reliably caused by cracks or shadows and has the content crack or shadow; similarly for "water or twater." But then this disjunctive content is a species of wide, not narrow, content. In short, although being shared by twins is necessary for mental content to be narrow, this is not sufficient for narrow content.
To press further, if the content of one's visual state is to be individualistic, it must be shared by doppelgängers no matter how different their environments. Thus, the case of twins is merely a heuristic for thinking about a potentially infinite number of individuals. But then the focus on a content shared by two individuals, and thus on a content that is neutral between two environmental causes, represents a misleading simplification
insofar as the content needed won't simply be "crackdow" but something more wildly disjunctive. This is because there is a potentially infinite number of environments that might produce the same intrinsic, physical state of the individual's visual system as (say) cracks do in the actual world. It is not that we can't simply make up a name for the content of such a state (we can: Call it "X"). Rather, it is that it is difficult to view a state so individuated as being about anything. And if being about something is at the heart of being intentional, then this calls into question the status of such narrowly individuated states as intentional states.
Segal has claimed that the narrow content of "crackdow," or by implication "water or twater," need not be disjunctive, but simply more encompassing than, respectively, crack or water. But casting the above points in terms of disjunctive content simply makes vivid the general problems that (1) individuation of states in terms of their content still proceeds via reference to what does or would cause them to be tokened; and (2) once one prescinds from a conception of the cognitive system as embedded in and interacting with the actual world in thinking about how to taxonomize its states, it becomes difficult to delineate clearly those states as intentional states with some definite content. As it is sometimes put,
