Institut Jean Nicod




Jean-Nicod Lectures and Prize 2021

 

 

Frances Egan was born in Belfast, Northern Ireland, and grew up in Canada. She received a BA and an MA in philosophy from the University of Manitoba and a PhD in philosophy of science from the University of Western Ontario in 1988. She has taught throughout her career at Rutgers University in New Jersey. She has held research fellowships at the Center for Interdisciplinary Research (ZiF) at Bielefeld University in Germany, at the Institute for Advanced Studies of the Hebrew University of Jerusalem, and at the Center for Mind, Brain, and Cognitive Evolution at Ruhr University Bochum in Germany. She has published numerous articles on issues in the philosophy of mind and psychology and on the foundations of cognitive science.

 

 

Introduction of Frances Egan by the philosopher of mind Pierre Jacob, emeritus research director at the CNRS

               

It’s a great pleasure to introduce today’s speaker and the recipient of the Jean-Nicod Prize for 2021: Frances Egan.

She was trained in Canada and received her PhD in 1988 from the University of Western Ontario, in London, Ontario. Since 1990 she has held a teaching position in the philosophy department at Rutgers University in New Brunswick, New Jersey, in the USA.

Frankie Egan is a philosopher of the cognitive sciences, and she teaches in one of the leading departments for the philosophy of the cognitive sciences. Her work lies right at the foundational interface between the philosophy of mind and the philosophy of science. Her major question has been, and still is: what is the role of representational content in computational models of cognitive capacities and cognitive processes? Arguably this question arises only if one jointly accepts a representationalist picture of cognition and a computational conception of mental processes.

Two prime examples of a computational-representational approach to cognitive capacities have been Noam Chomsky’s approach to the human language faculty and David Marr’s approach to early vision, to which Egan has devoted much of her philosophical attention. Moreover, Marr famously distinguished three levels of organization of a physical device that performs computations, e.g. an adding machine. The highest level is constituted by the definition of what is being computed, e.g. the addition function, which takes pairs of integers as arguments and yields their unique sum as the value of the function. The second level is the algorithmic level, i.e. the specification of the particular procedure that computes each particular value of the function from each particular pair of arguments in a finite sequence of steps. Finally, there is the implementation, or hardware, level.
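Marr’s distinction between the first two levels can be made concrete with a minimal sketch (my illustration, not an example from the lecture): the same addition function, fixed at the computational level, admits of different procedures at the algorithmic level, all running on the same hardware (here, the Python interpreter).

```python
# Computational level: WHAT is computed — the addition function, which maps
# each pair of integers to their unique sum.
def addition_spec(x: int, y: int) -> int:
    return x + y

# Algorithmic level: HOW it is computed — one particular procedure that
# reaches the same value in a finite sequence of steps, by repeatedly
# applying the successor operation (assumes y is non-negative).
def addition_by_successor(x: int, y: int) -> int:
    result = x
    for _ in range(y):
        result += 1  # one successor step per iteration
    return result

# The two levels agree on every argument pair, though the procedures differ.
assert addition_by_successor(3, 4) == addition_spec(3, 4) == 7
```

A different algorithm (say, column-wise addition with carries) would compute the very same function, which is the point of keeping the two levels distinct.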

So, on a representationalist picture of cognition and a computational conception of mental processes, mental representations are construed as neural vehicles that can serve as input to, and output of, computations.

As mental representations, these vehicles presumably have content. Could content itself play an explanatory role in computational models of cognitive capacities and processes? This is, and has been over the years, Frankie Egan’s basic question.

She offers what she calls a deflationary answer to this question. Her answer is meant to stand as a third option, intermediate between full-blooded intentional realism and content-eliminativism, and it has become increasingly influential in recent philosophy of cognitive science.

Jerry Fodor (the first Jean-Nicod lecturer in 1993) was unquestionably the arch-representative of intentional realism. Brentano, who was an intentional realist, and Quine, who was not, famously agreed that intentional realism is incompatible with ontological materialism. Fodor thought that the tension between materialism and intentional realism calls for the naturalization of content. And so did Fred Dretske, the second Jean-Nicod lecturer. Arguably, only if mental representations have a determinate content does it make sense to try to naturalize it. Moreover, Fodor assumed that the contents of an agent’s propositional attitudes (her beliefs and desires) reduce to the meanings of symbols in the language of thought. So his aim was to naturalize the meanings of symbols in the language of thought.

Quite unexpectedly perhaps, Noam Chomsky (one of the main architects of the Cognitive Revolution and one of the sharpest critics of behaviorism) has turned out to be an advocate of content-eliminativism. In some of his discussions of David Marr’s computational approach to vision, Chomsky has dismissed as scientifically meaningless (if not simply meaningless) any question about the content of the internal representation of a person seeing a cube (in some specific experimental conditions). On his view, there seems to be no scientific room for a notion of representational content in computational models of cognitive capacities.

Again, consider two questions: do neural vehicles in an agent’s brain have determinate contents? And if they do, could those contents play an explanatory role in computational models of cognitive capacities?

For Egan, I think, there is no straightforward answer to either question because, on her view, there are at least two kinds of content, not one, and both are putatively relevant to the success of computational models of a cognitive capacity, e.g. vision: there is mathematical content and there is cognitive content.

While the former is entirely fixed by the computational theory in the narrow sense, the latter is relative to explanatory contexts other than building computational models as such.

Mathematical content is primarily specified by the components of the computational theory in the narrow sense, including the mathematical function being computed and the algorithms and structures that the cognitive mechanism uses to compute it. For example, Marr’s theory of early vision purports to explain edge detection, in part, by positing the computation of the Laplacian of a Gaussian of the retinal array. This is what Egan calls the function-theoretic characterization (in the mathematical sense of ‘function’) of Marr’s computational theory in the narrow sense.
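The mathematical function in question can be sketched concretely. The following is an illustrative reconstruction (mine, not code from Marr or Egan) of the standard Laplacian-of-Gaussian filter, applied to a toy “retinal array”: the sign of the filtered response changes across an intensity step, and such zero-crossings are what Marr’s theory treats as marking edges.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def laplacian_of_gaussian_kernel(size: int = 9, sigma: float = 1.4) -> np.ndarray:
    """Discrete Laplacian-of-Gaussian filter: the mathematical function that,
    on Marr's theory, early vision computes over the retinal array."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    norm = (x**2 + y**2) / (2 * sigma**2)
    return -(1.0 / (np.pi * sigma**4)) * (1 - norm) * np.exp(-norm)

# A toy "retinal array": a vertical step in intensity (dark left, bright right).
image = np.zeros((32, 32))
image[:, 16:] = 1.0

# Apply the filter by direct 2-D correlation (the kernel is symmetric, so this
# equals convolution); sign changes in the response locate the edge.
kernel = laplacian_of_gaussian_kernel()
windows = sliding_window_view(image, kernel.shape)    # shape (24, 24, 9, 9)
response = np.einsum('ijkl,kl->ij', windows, kernel)  # shape (24, 24)
```

Characterized this way, the theory says only that the mechanism computes this mathematical function; nothing in the formalism itself mentions edges, which is precisely Egan’s point about where mathematical content ends.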

However, Frankie does agree that the inputs and outputs of computations are open to further interpretation, in terms of the properties and relations instantiated by particular individuals in the environment in which the visual system works and evolved.

What she denies is that such an interpretation is built into the computational model itself or the computational theory in the narrow sense. For example, the early stages of the visual system are said to compute edges. On Frankie’s view, EDGE is part of the domain-specific cognitive content attributed to the computations of the visual system; it is not part of the mathematical content specified by the computational theory proper. Nor is it fixed by the components of the computational theory in the narrow sense.

The attribution of cognitive content depends on a variety of pragmatic factors. In fact, on Frankie’s view, cognitive content is part of what she calls the commonsense intentional gloss of the computational theory proper. The main function of the intentional gloss is to bridge the gap between the computational models of sub-personal cognitive mechanisms and their manifest image, i.e. our commonsense personal understanding of what these mechanisms can achieve.

While it is clear that Frankie’s deflationary account stands in sharp contrast to Fodor’s hyper-representationalism, or intentional realism, it also stands in sharp contrast with the view of Tyler Burge (an earlier Jean-Nicod lecturer), who argued that Marr’s computational theory presupposes an intentional construal of the contents of veridical states of the visual system.

Now before closing this introduction, I would like to raise what I take to be a pair of deep and interesting questions for Frankie’s deflationary account. Granted, both Fodor’s and Burge’s stances are likely to count as inflationary intentional realist positions.

The first question is: what exactly is a deflationary account of the role of content in cognitive-scientific explanations committed to? To what extent is Frances’s deflationary account an alternative to Chomsky’s content-eliminativism?

Frankie writes somewhere that “Chomsky is wrong to conclude that content plays no explanatory role in computational cognitive models.” But Chomsky is willing to grant content an auxiliary role in the informal or intuitive presentation of the computational theory. Now Frankie also thinks that cognitive content is not part of the computational model proper, but that there is room for an intentional gloss of computational models. So the following interesting question arises: to what extent does Frankie really disagree with Chomsky?

Another really deep and intriguing question is raised, I think, by one feature of Frankie’s deflationary account, which I have not yet mentioned explicitly. In addition to the computational component in the narrow sense, Frances also posits what she calls an ecological component, which belongs to the computational theory proper in a broad sense. As she puts it, “only in some environments would computing the Laplacian of a Gaussian help an organism to see” (as opposed to doing something else). It is, I think, a deep question whether Frankie’s deflationary account allows her to locate the ecological component within the computational theory proper in the broad sense rather than as part of the commonsense intentional gloss.

With these questions in mind, I am really eager to listen to her four lectures, the first of which is entitled: “Representation in Computational Cognitive Science.”

Pierre Jacob

 

Philosophers of mind tend to hold one of two views about the existence of mental representations: they are either robustly realist about representations, taking them to have objective reality independent of theorists’ explanatory interests, or they embrace some form of eliminativism. I develop and defend a distinctive ‘third way’, arguing that attributions of content to mental states do not pick out an essential property of mental states, but instead serve various important pragmatic and explanatory purposes. Mental content attributions are best understood as pragmatically motivated glosses.

 

 

Representation in Computational Cognitive Science

Presentation of the Jean-Nicod Prize and cocktail reception after the lecture

Tuesday 16 November - 2:30 pm

Ecole normale supérieure, Salle Dussane, 45 rue d’Ulm, 75005 Paris

Much of cognitive neuroscience traffics in representation talk. Computational theories of vision, for example, posit structures that are described as representing edges in the world. Neurons are said to represent elements of their receptive fields. Despite the widespread use of representation talk in computational theorizing, there is surprisingly little consensus about how such claims are to be understood. Is representation talk to be taken literally? Is it just a useful fiction? I sketch an account of the nature and function of representation in computational cognitive models that rejects both of these views while acknowledging that there is an element of truth in each. According to the deflationary view I defend, representational content serves several important pragmatic purposes, most notably connecting formal computational accounts to cognitive explananda with which we are pretheoretically familiar.

 

Naturalizing the Mind without Naturalizing Intentionality

Friday 19 November - 2 pm

Ecole normale supérieure, Salle des Actes, 45 rue d’Ulm, 75005 Paris

Computational cognitive science aims to provide a naturalistic foundation for theorizing about mental states. It can provide such a foundation only if it makes no essential reference to meaning or content or to intentional processes such as understanding. Philosophers typically assume that computational theories characterize mental states in intentional terms, and they have undertaken to demonstrate how to discharge the commitment to intentionality by specifying non-intentional and non-semantic sufficient conditions for a mental state to have a determinate content. So far, this so-called naturalization project has not met with success. I argue that computational theories do not need naturalization, since they make no essential appeal to meaning or content. Nonetheless, an adequate theory of cognition must maintain contact with the way that we see ourselves as intentional agents. I explain how glossing computational states in representational terms provides the crucial link between the mechanical processes posited in the computational theory and the manifest personal-level, rational capacities that the theory attempts to explain. In thus reconceiving the project of naturalizing the mind, I sketch a more realistic alternative to the traditional naturalization project, one that computational theories are well placed to satisfy.

 

Belief and its Linguistic Representation

Tuesday 23 November - 2 pm

Ecole normale supérieure, Salle Dussane, 45 rue d’Ulm, 75005 Paris

In the last two lectures I argue that beliefs (lecture 3) and perceptual experiences (lecture 4) are not relations between subjects and mental representations of some sort. I argue that they are rather to be understood as monadic properties of subjects that are modelled by aspects of external reality. Beliefs, in particular, are modelled as linguistic objects that have syntax as well as content and truth conditions. That is to say, we gloss beliefs as being linguistic objects with such properties, though they do not actually have them. Rather, they have properties which have linguistic properties as images. This scheme for modelling beliefs in linguistic terms has important pragmatic virtues, especially enabling the prediction, regulation, and explanation/rationalization of behavior. 

 

Perceptual Experience

Friday 26 November - 2 pm

Ecole normale supérieure, Salle des Actes, 45 rue d’Ulm, 75005 Paris

In the philosophy of perception, representationalism is the view that all phenomenological differences among mental states are representational differences, in other words, differences in content. In the final lecture I defend an alternative view which I call external sortalism, inspired by traditional adverbialism, and according to which experiences are not essentially representational. The central idea is that the external world serves as a model for sorting, conceptualizing, and reasoning surrogatively about perceptual experience. On external sortalism, contents once again are construed as a kind of gloss. We can retain what is attractive about representationalism, namely, that perceptual experiences can be evaluated for accuracy, without problematic commitment to the idea that they bear a substantive, representational relation to external objects and properties and that this relation determines the phenomenal character of experience.

 

 

 

 

HEALTH MEASURES: The lectures will be subject to the health measures in force:

  • A valid health pass is required (for more information, see the government website)
  • Masks must be worn

 

 

Selected bibliography

  • 2020 - “A Deflationary Account of Mental Representation,” in What Are Mental Representations?, J. Smortchkova, K. Dolega, and T. Schlicht (eds.), Oxford University Press, 26-53.
  • 2017 - “Function-Theoretic Explanation and the Search for Neural Mechanisms,” in Explanation and Integration in Mind and Brain Science, David M. Kaplan (ed.), Oxford University Press, 145-163.
  • 2014 - “How to Think about Mental Content,” Philosophical Studies 170, 115-135.
  • 2012 - “Metaphysics and Computational Cognitive Science: Let’s Not Let the Tail Wag the Dog,” The Journal of Cognitive Science 13, 39-49.
  • 2012 - “Representationalism,” in The Oxford Handbook of Philosophy of Cognitive Science, E. Margolis, R. Samuels, and S. Stich (eds.), Oxford University Press, 250-272.
  • 2003 - “Naturalistic Inquiry: Where Does Mental Representation Fit In?,” in Chomsky and His Critics, L. Antony and N. Hornstein (eds.), Blackwell, 89-104.
  • 1995 - “Computation and Content,” The Philosophical Review 104, 181-203.
  • 1992 - “Individualism, Computation, and Perceptual Content,” Mind 101, 443-459.
