Carey’s (2009) theory of conceptual development starts with the assumption that learners are equipped from birth with mechanisms that turn sensory inputs into proto-conceptual representations. These innate systems are called core cognition systems. The proto-conceptual representations they produce have an iconic format, operate continuously throughout life, are inferentially rich, and are specific to particular cognitive domains (e.g., number, action, cause).
These initial proto-conceptual representations are, in Carey’s picture, just the first step of our conceptual development. Learners very soon build proper conceptual representations that differ in several respects from the ones produced by core cognition systems. Carey (2009, p. 22) stresses that these mature conceptual representations are structured into intuitive theories, are not identifiable by innate mechanisms, usually have a symbolic format, and do not operate continuously throughout the life of an individual. For mature concepts, then, Carey upholds the so-called theory-theory view of conceptual structure, according to which concepts are structured into semantic networks (i.e., the intuitive theories) of inferential connections, from which they get their semantic content.
Unlike core cognition systems, which are innate and operate continuously throughout life, mature concepts are in Carey’s picture learned, and they often get replaced over time. That is, conceptual development is, according to Carey, discontinuous: children and adults repeatedly change the concepts that they possess. Carey characterizes these conceptual discontinuities in the life of an individual using the concept of incommensurability from the philosophy of science (see, inter alia, Kuhn 1962). According to Carey (2009, p. 364), an individual, at different times of her life, possesses concepts that are incommensurable with each other, in the sense that certain beliefs and inferences connected to one concept cannot be formulated within the resources of the other. More exactly, Carey stresses that incommensurability in conceptual development is always local, i.e., it affects only some parts of the semantic networks related to the relevant concepts. Nevertheless, this local incommensurability between incompatible subsets of two semantic networks often results in the relevant concepts having incompatible meanings and references. Just as in the history of science, where a new scientific theory often does not grow cumulatively out of an old one but instead replaces the old conceptual system with an incompatible one, so too, Carey argues, children’s ‘intuitive theories’ do not develop cumulatively through the addition of new information to existing concepts, but often replace old concepts with new, incompatible ones.
Because of these conceptual discontinuities, Carey stresses that the phenomenon of conceptual change must be distinguished from knowledge enrichment and belief revision (Carey 2009, pp. 364-365), since “[in] cases of conceptual change, new primitives are created, whereas belief revision always involves testing hypotheses that are stated in terms of already available concepts” (Carey 2009, p. 520). This brings us to a major contribution of TOOC, namely, its attempt to tackle Fodor’s (1998) circularity challenge to concept learning. Fodor thinks of concept learning as a mere process of hypothesis testing. He argues that formulating the hypotheses to be tested requires having the relevant concepts. As a consequence, any concept that derives from this learning method is not truly learned, because it must be available in the learner’s mind before learning. According to Fodor, this makes the whole idea of concept learning (as a matter of hypothesis formation and testing) circular. TOOC provides a promising response to the challenge that Fodor’s problem poses, i.e., a general learning mechanism by which children and adults learn new concepts from old ones. Carey calls this learning mechanism Quinean Bootstrapping.
Quinean Bootstrapping, as described by Carey, involves three steps. Let us call the developmentally prior conceptual system that serves as the input of an episode of bootstrapping CS1, and the new conceptual system that is its output CS2. In the first step of bootstrapping, the ‘concepts’ of CS2 are initially learned as placeholder symbolic representations without content: the learner constructs an uninterpreted, purely syntactically structured placeholder system. For example, when learning to count, each of the symbols “1”, “2”, ... is not associated with a representation of a particular number of objects; these symbols take the form of empty labels. Nevertheless, at this first stage of the bootstrapping process, children do establish internal (syntactic) relations between the placeholders. In the second step of Quinean Bootstrapping, children acquire a gradual or partial semantic interpretation of CS2 via CS1. According to Carey, children acquire such interpretations by performing repeated mappings between component substructures of the two systems; the mappings may include analogies, abduction, thought experiments, and limiting case analysis. Finally, in the third step of Quinean Bootstrapping, children become able to fully interpret CS2 and to integrate the newly acquired concepts with other concepts in domain-general and coherent ways. Together, the three steps can be summarized as follows:
Summary of the steps involved in Quinean Bootstrapping

Step 1: Constructing a syntactic placeholder structure. The learner starts by constructing the placeholder structure of CS2, which contains uninterpreted and purely syntactically connected placeholder symbolic representations without content.
Step 2: Establishing partial mappings between substructures. The learner establishes a partial semantic interpretation of CS2 by repeatedly mapping component substructures between CS1 and CS2.
Step 3: Completing and choosing the best interpretation. The learner establishes a full semantic interpretation of CS2, the representations of which are inferentially inter-connected (embedded in intuitive theories).
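To make the three steps concrete, here is a deliberately simplified toy sketch, in Python, of bootstrapping the count list. It is our own illustration, not part of Carey’s apparatus: the list order plays the role of the syntactic placeholder structure, a small lookup table stands in for the core-cognition-based partial interpretation, and a successor rule stands in for the final inductive step.

```python
# Illustrative toy model of the three bootstrapping steps for the count list.
# All structures here are expository simplifications, not Carey's formalism.

# Step 1: a purely syntactic placeholder structure (the skeleton of CS2).
# The count words are empty labels; the only structure is their linear order.
count_list = ["one", "two", "three", "four", "five"]

# Step 2: partial interpretation. Small cardinalities can be mapped onto
# core-cognition representations (parallel individuation tracks up to ~3 items).
partial_interpretation = {"one": 1, "two": 2, "three": 3}

# Step 3: full interpretation. The learner notices that moving one word
# forward in the list corresponds to adding one object and generalizes,
# thereby interpreting the rest of the list.
def interpret(word):
    """Return the cardinality a count word denotes after bootstrapping."""
    if word in partial_interpretation:
        return partial_interpretation[word]
    # Generalize: each later word denotes its predecessor's value plus one.
    idx = count_list.index(word)
    return interpret(count_list[idx - 1]) + 1

print(interpret("five"))  # 5: beyond the reach of the partial interpretation
```

The sketch makes vivid where the philosophical work lies: steps 1 and 2 are easy to write down, but the generalization in step 3 is stipulated by us rather than explained, which is precisely what the critiques below press on.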
2.2 Critiques and Refinements

Several critiques have cast doubt on the theoretical rigor and success of Carey’s account and, especially, on the viability of Quinean Bootstrapping as a general learning mechanism. We will discuss three different critiques that have been raised against Carey’s account: the circularity challenge, the deviant interpretation challenge, and the specification challenge. Together with these critiques, we will also present some refinements of bootstrapping that have been proposed as solutions to (some of) them. We will then critically assess these critiques and the proposed solutions and conclude that important obstacles to a successful account of bootstrapping remain.
The first critique of Carey’s Quinean Bootstrapping that we are going to discuss is Fodor’s (1980; 1998; 2010) circularity challenge, briefly outlined in Section 2.1. The key insight of Fodor’s circularity challenge is that insofar as concept learning is a problem of rational inductive inference involving hypothesis testing, the formation of the hypotheses to be tested presupposes the existence of the very concepts to be learned. We think that the major issue justifying the popularity of the circularity challenge in cognitive science is the following unanswered question: how can an information-processing system come up with genuinely novel representations without the information carried by those representations already being encoded, if only implicitly, in the system? This is the version of the circularity challenge that is most pressing for Carey’s account of conceptual development.
Beck (2017) takes Quinean Bootstrapping to be able to meet the circularity challenge thanks to the notion of a computational constraint. This notion, argues Beck, albeit only implicitly used by Carey, is pivotal to the success of her account of conceptual development. Computational constraints are, for Beck (2017, pp. 115-117), implicit or explicit rules for the use of a concept that we acquire in the process of learning a new concept. Beck divides these computational constraints into two categories, internal and external. External constraints are rules governing conceptual use that learners acquire from the external world, such as, for instance, the law \( f = m \times a \), which acts as a computational constraint on our learning of high-school physics concepts. They typically refer to immediate and direct physical, social, and situational influences from the environment that shape cognitive processes in a situation-specific manner (e.g., ambient noise can affect attentional mechanisms and problem-solving) (Gibson 1979; Clark 1997). Internal constraints are, instead, innate rules that govern the conceptual learning and inference capabilities of learners, such as, for instance, the essentialist tendency that, according to some theorists, children exhibit in their conceptual development. These constraints, argues Beck (2017, pp. 118-120), guide children’s interpretation of the symbolic placeholders in an episode of bootstrapping by constraining the possible interpretations of the symbols. By guiding the learners’ acquisition of new conceptual roles, the notion of computational constraint is what allows Quinean Bootstrapping to meet Fodor’s challenge: Carey’s account does not need to presuppose that the concepts children learn are already somehow stored in their cognition.
All that needs to be presupposed is core cognition together with some general rules governing concept learning (i.e., the internal computational constraints, in Beck’s terminology), from which learners, thanks to the outside stimuli and the pivotal action of externally learned rules governing conceptual use (i.e., Beck’s external computational constraints), can learn new conceptual systems.
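As an illustration of how constraints of the two kinds might jointly filter the interpretation of a placeholder, consider the following minimal sketch. The candidate interpretations and the two constraint predicates are hypothetical stand-ins that we introduce for exposition; they are not Beck’s own examples.

```python
# Toy illustration of Beck-style computational constraints: candidate
# interpretations of a placeholder symbol are filtered by internal (innate)
# and external (culturally learned) rules. All content here is invented.

candidates = [
    {"label": "three", "meaning": "exactly 3 objects"},
    {"label": "three", "meaning": "roughly 3 objects"},
    {"label": "three", "meaning": "3 objects observed before noon"},  # deviant
]

def internal_constraint(c):
    # Stand-in for an innate bias toward exact, context-independent readings.
    return "roughly" not in c["meaning"] and "noon" not in c["meaning"]

def external_constraint(c):
    # Stand-in for a learned counting norm: count words denote exact numbers.
    return c["meaning"].startswith("exactly")

surviving = [c for c in candidates
             if internal_constraint(c) and external_constraint(c)]
print(surviving)  # only the exact-cardinality reading survives
```

The sketch also exposes the gap we press on below: the filtering step is easy to state, but nothing in it explains how the learner generates the candidate interpretations or assigns a genuinely new semantic role in the first place.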
Although we agree that more attention should be shifted towards the pivotal role of computational constraints in Quinean Bootstrapping, we remain not entirely convinced by Beck’s explanation. Firstly, it remains unclear what exactly a computational constraint amounts to. Furthermore, it remains unclear how appealing to internal and external computational constraints solves the circularity challenge. Beck’s discussion convincingly argues that computational constraints are indeed used by the bootstrapping learners for partially interpreting the placeholders from which, according to the bootstrapping procedure, the new concepts will arise. Yet, how exactly is a learner able to assign a new (meta)semantic role to a given representation? What is the cognitive mechanism through which the learner, guided by the computational constraints, learns a new concept? To solve the circularity challenge, one must answer these two questions.
The second critique we are going to discuss is the deviant interpretation challenge. This challenge concerns the question of why we learn natural concepts (e.g., green and blue), as opposed to strange or unnatural concepts (e.g., grue and bleen). If, according to Carey, children learn many concepts by bootstrapping them from innate core cognition, which part of this process makes children learn the natural concepts that they actually learn (and not some deviant alternative)? This challenge is not specific to bootstrapping but, as Beck (2017, p. 113) and Pantsar (2021, p. 5796) stress, can be leveled against any inductive theory of concept learning. Yet some refinements of bootstrapping have pointed to a possible solution. Recently, Pantsar (2021) proposed a refined account of our bootstrapping of integer concepts that tackles the deviant interpretation challenge for this important case of concept construction.
Pantsar’s refinement of bootstrapping adds two components to Carey’s and Beck’s accounts. First, he stresses that a plurality of numerical core cognition systems is arguably active in this episode of bootstrapping (cf. Pantsar 2021, pp. 5803-5805). That is, in order to construct integer concepts, learners need to construct a hybrid model of two different core cognition systems (i.e., what we will call in Section 4 the analog magnitude representation system and the parallel individuation system). Second, such a bootstrapping process is pivotally fostered, and thus constrained in its interpretation, by external cultural factors (cf. Pantsar 2021, pp. 5806-5808). It is thus a process of enculturation, i.e., of interaction with external cultural factors, that shapes the development of our number cognition. More specifically, enculturation is a gradual process of cultural learning that shapes cognitive processes over time, involving the internalization of external constraints, cultural and societal norms, values, beliefs, and practices through socialization and participation in institutions and traditions, e.g., education, family upbringing, or media exposure (Vygotsky 1978; Bruner 1990). In particular, in the case of numbers, socio-cultural factors support children’s engagement in counting routines, allowing them to associate a sequence of initially meaningless words with new conceptual roles, e.g., to notice that “three” is regularly uttered in light of observations of three objects. According to Pantsar, such an enculturation process, together with the aforementioned construction of a hybrid model of numerical core cognition, makes children learn the natural integer concepts and not some non-standard version of them. We agree with Pantsar that external factors, such as socio-cultural ones, are crucial for explaining why we learn certain concepts and not others.
Enriched with such enculturation-driven considerations, then, the process of bootstrapping can answer the deviant interpretation challenge, at least as well as other inductive theories of conceptual learning.
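The counting-routine idea can be illustrated with a toy tally model: the learner records which set size each count word co-occurs with across repeated, culturally scaffolded counting episodes, and associates the word with the cardinality it most reliably accompanies. The episode data and the co-occurrence rule are our own simplifying assumptions, not Pantsar’s model.

```python
from collections import Counter, defaultdict

# Toy sketch of enculturated word-cardinality learning: repeated counting
# routines let a learner tally which set size each count word accompanies.
# The episodes below are invented for illustration (one is noisy).
episodes = [
    ("three", 3), ("three", 3), ("two", 2), ("three", 3),
    ("two", 2), ("three", 4),  # a noisy, miscounted episode
]

tallies = defaultdict(Counter)
for word, observed_size in episodes:
    tallies[word][observed_size] += 1

# Associate each word with the cardinality it most reliably co-occurs with.
associations = {w: c.most_common(1)[0][0] for w, c in tallies.items()}
print(associations)  # {'three': 3, 'two': 2}
```

Note that the routine itself (who counts what, when, and with which word list) is supplied by the surrounding culture; on Pantsar’s picture it is this external scaffolding that rules out deviant associations.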
Finally, the third critique that we are going to focus on is what we call the specification challenge. This amounts to establishing a precise model of how Quinean Bootstrapping works and, thus, to specifying all the steps involved in Carey’s outline of this procedure. One notable attempt at explicating Quinean Bootstrapping was presented by Piantadosi et al. (2012), who modeled bootstrapping as governed by Bayesian hypothesis testing: learning a concept means inferring the hypothesis that best maps a label onto the correct concept. For example, according to the authors, the process of learning the concept two can be modeled as the process of learning which hypothesis best maps the label ‘two’ onto a subset representation with cardinality two. The formal framework of Piantadosi et al. involves two ingredients. The first is a symbolic representational system, i.e., lambda calculus (a formal language for compositional semantics). The second is a formalization of the learning task in terms of Bayesian inference, used to “determine which compositions of primitives are likely to be correct, given the observed data” (ibid.). Piantadosi et al. claim that their model ‘bootstraps’ number concepts with these ingredients. However, as Carey herself stressed (Carey 2015), Piantadosi et al.’s attempt at modeling Quinean Bootstrapping fails to be a genuine model of conceptual learning because it equates learning with mere hypothesis testing, thereby making bootstrapping fall prey to Fodor’s circularity challenge. And indeed this failure of Bayesian models to account for genuine conceptual change traces back to long-standing issues in Bayesian confirmation theory (see, inter alia, Earman 1992).
Bayesian models, in fact, typically focus on modest discontinuities that explain only a change in the available expressive power associated with a given conceptual system, while, as we saw in Section 2.1, Carey’s theory focuses on identifying and explaining radical discontinuities, i.e., those involved when altering the total expressive power of a given conceptual system beyond possible combinations of its primitives. This focus on radical discontinuities is not captured in the model of Piantadosi et al., since the model assumes a latent hypothesis space that already determines before learning which hypotheses are possible to infer. In the model, the aspects that do change during learning are not the representational capacities but only how much probability is assigned to a hypothesis mapping an existing candidate concept onto a given label. It hence remains unclear how radical bootstrapping could proceed via Bayesian hypothesis testing.
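The following minimal sketch, far simpler than Piantadosi et al.’s lambda-calculus model and using invented hypotheses and data, illustrates both the Bayesian learning scheme and the critique: the hypothesis space is fixed before learning, so inference can only redistribute probability over pre-given candidate concepts.

```python
# Toy Bayesian hypothesis testing for the label 'two'. The hypothesis space,
# prior, and data are invented for illustration; note that all candidate
# concepts exist before learning starts, which is the crux of the critique.

hypotheses = {
    "exactly-2": lambda n: n == 2,   # 'two' means exactly two
    "at-least-2": lambda n: n >= 2,  # 'two' means two or more
    "small-set": lambda n: n <= 3,   # 'two' means any small set
}
prior = {h: 1 / 3 for h in hypotheses}

# Observed data: (set size, was it labelled 'two'?), invented.
data = [(2, True), (3, False), (2, True), (4, False), (1, False)]

def likelihood(h, datum):
    """Probability of a datum under hypothesis h, with 5% labelling noise."""
    size, labelled_two = datum
    return 0.95 if hypotheses[h](size) == labelled_two else 0.05

# Update the posterior by multiplying in each datum's likelihood, then
# renormalize so the probabilities sum to one.
posterior = dict(prior)
for datum in data:
    posterior = {h: p * likelihood(h, datum) for h, p in posterior.items()}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best)  # 'exactly-2' wins; no new representational primitive is created
```

Learning here changes only the probabilities attached to the three pre-specified hypotheses; nothing in the procedure could add a fourth, genuinely novel one, which is why such a model leaves the radical discontinuities of bootstrapping unexplained.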
We have seen three critiques of Carey’s bootstrapping account of conceptual development and three proposed refinements of the bootstrapping process. Our assessment is that bootstrapping, when adequately refined and complemented, will be able to meet all the aforementioned challenges in a satisfactory way. In fact, Beck’s insistence on the pivotal role of computational constraints points to a way out of Fodor’s circularity challenge. Moreover, the importance of external cultural factors in constraining the learning of new concepts, highlighted by Pantsar, equips bootstrapping with a viable response to the deviant interpretation challenge. Finally, the idea of modeling bootstrapping within a broadly Bayesian framework, championed by Piantadosi et al. (2012), opens a wide array of possible ways of formalizing the bootstrapping process in order to tackle the specification challenge.
However, despite the virtues of these refinements, serious doubts remain concerning the circularity and specification challenges. In particular, the common stumbling block for a satisfactory solution to these two challenges seems to be the vagueness surrounding the cognitive processes involved in a learner’s partial interpretation of a new conceptual system, and in the related pivotal step of creating new metasemantic roles with the help of computational constraints. The specific steps of the cognitive processes involved in bootstrapping need to be properly explicated before we can explain why bootstrapping offers genuine concept learning and before we can formalize this process.