Povinelli et al. on "Reinterpretation"

In yesterday's "AI deception?" post, I proposed that we ought to apply to AI an analogy to the philosophical evaluation of "theory of mind" issues in animals. And one of the clearest presentations of that evaluation is in Daniel Povinelli, Jesse Bering, and Steve Giambrone, "Toward a science of other minds: Escaping the argument by analogy" (2000). You should read the whole thing — and maybe look through some of the many works that have cited it. But today I'll just present some illustrative quoted passages.


A central assumption of cognitive science is that mental states play a causal role in generating the behavior of most encephalized biological organisms. But the cognitions of humans, at least, include more than first-order emotions, desires, plans, beliefs, and such—we also reason about these states and processes. Premack and Woodruff (1978) coined the term “theory of mind” to refer to this capacity. “Such a system,” they observed, “may properly be viewed as a theory because such [mental] states are not directly observable, and the system can be used to make predictions about the behavior of others” (p. 515). Indeed, core aspects of this system of second- (and higher-) order representations may be a more or less universal feature of human cognition (Povinelli & Godfrey, 1993; see Lillard, 1998, for a review). In this essay, we examine two questions about the seemingly universal aspects of theory of mind. First, what causal role do our second-order representations of mental states play in generating our behavior? Second, are we alone in possessing such a theory of mind?

Philosophers have formulated answers to both of these questions using various a priori arguments, but perhaps the most pervasive of these is the argument by analogy. This argument assumes that we can know which mental states produce which of our behaviors through introspection—or at least something very much like introspection (e.g., Russell, 1948). Thus, the argument asserts, we are justified in postulating specific mental states in other species by analogy to ourselves. That is, if we know that mental state x causes behavior y in ourselves, then we are on firm ground in inferring mental state x in another species to the extent that it exhibits behavior y (Hume, 1739–1740; Romanes, 1882, 1883). In this essay, we critically examine some common assumptions about the role that second-order mental states play in generating the behavior of human and nonhuman primates. Some of these assumptions are explicit in the argument by analogy, whereas others simply appear to follow from it. We show that the argument by analogy fails to recognize the complexity of social behavior that can be generated by first-order intentional states—as evidenced by recent empirical research, which we discuss in some detail.


At this point, our general reader may be puzzled. How is it, they will wonder, that chimpanzees—especially chimpanzees!—can exhibit the remarkably sophisticated social behaviors so eloquently described by Jane Goodall (1971, 1986), Frans de Waal (1982, 1989, 1996) and others, without possessing at least an inkling of others as psychological agents? After all, the social world of primates is one in which dominance status, recent positive or negative interactions, and complicated and shifting alliances all play major roles in determining what should be done next. To wit, how could it be that nonhuman primates deceive and manipulate each other (e.g., de Waal, 1986; Byrne & Whiten, 1985; Whiten & Byrne, 1988) if they do not represent each other's beliefs? Furthermore, how could chimpanzees share with us so many of these social behaviors, down to the finest level of detail, and yet interpret them so differently? If we were to reply that these animals just learn, through trial and error, that certain behaviors lead to certain consequences, the general reader would remain deeply unsatisfied. First, such an explanation seems to involve a double-standard: The exact same behaviors are to be explained in different ways depending solely on whether they are performed by humans or by other primates. Second, such a simplistic account seems to fly in the face of the reality of our close common ancestry—is there not some biological doctrine that could be invoked to bolster the probability that when two species are closely related, similar behavior must be attended by similar psychological causes?


[O]ur reinterpretation hypothesis proposes that the majority of the most tantalizing social behaviors shared by humans and other primates (deception, grudging, reconciliation) evolved and were in full operation long before humans invented the means for representing the causes of these behaviors in terms of second-order intentional states. In this sense, our reinterpretation hypothesis may be the evolutionary analog of Annette Karmiloff-Smith’s (1992) concept of ‘representational redescription,’ which she posits as a major driving force in human cognitive development. Her proposal envisions a process in development whereby information implicitly in the mind is progressively recoded at increasingly explicit levels both within and across domains in ways that make this information increasingly available to the mind. One interpretation of our hypothesis is that humans have uniquely evolved the psychological mechanisms that allow for the most abstract levels of representational redescription (Karmiloff-Smith, 1992). But then what causal role is left for second-order intentional states? In our view, the highest level psychological descriptions of behaviors do not necessarily directly prompt the behavior they attend. To be sure, in some cases they may do so, but in many other cases they may serve to regulate behavior at a higher level of hierarchical description. In many cases, however, they may merely be convenient (and useful) ad hoc descriptions of our behaviors — behaviors that both can and do occur without such descriptions.


Note the analogy between those "reinterpretation" issues and the question of whether LLM AI systems (and humans) "understand" the logic of their complex associative patterns.



3 Comments

  1. AntC said,

    June 11, 2024 @ 5:35 am

    the behavior of most encephalized biological organisms

    This just in: Elephants call each other by name, study finds

    Caveat:

    Using a machine-learning algorithm, the [researchers] identified 469 distinct calls, which included 101 elephants issuing a call and 117 receiving one.

    … suggests that elephants and humans are the only two animals known to invent “arbitrary” names for each other, rather than merely copying the sound of the recipient [– unlike those mischievous parrots or dolphins].

    Seems not to be the usual journalistic hype of 'animals can speak'.

  2. Victor Mair said,

    June 11, 2024 @ 8:18 am

    Speaking of which:

    ==============

    Do plants have minds?

    In the 1840s, the iconoclastic scientist Gustav Fechner made an inspired case for taking seriously the interior lives of plants

    https://aeon.co/essays/can-we-see-past-our-soul-blindness-to-recognise-plant-minds

    ==============

    I would say so only in a metaphorical sense.

    But plants do have senses, responses, tropisms, expressions, etc.

  3. Jon said,

    June 12, 2024 @ 1:08 am

    Thanks for the link to the paper. A clearly-written, fascinating study.
