Cognate Society Session, 10:15 AM - 11:45 AM (America/New_York); 2021/11/11 15:15:00 UTC - 2021/11/11 16:45:00 UTC
An explanation conveys content according to a structure. For example, as a child I was preoccupied with this question: why does your reflection in the mirror seem reversed from left to right and not from top to bottom? I might now answer by describing how we are attentive to some ways of moving our bodies more than others, and then linking this fact to the experience of looking in a mirror. In this explanation, I've conveyed an answer to the question in a particular way: through a general fact about the perception of motion, combined with a specific story bridging between the general fact and the case at hand. This latter element is the explanatory structure: the bones of an explanation, so to speak. I might have answered the mirror question with a different structure: providing an explanation that relies crucially on visualization, by acting out the different ways of "stepping into" the mirror. These two explanations may even in some sense provide the same answer, but through different means. My project explores cases where more than one explanatory structure is at work, treating these structures as members of a broader family of cognitive structures. What does it mean for structure, rather than content, to differ, and how do structures of different types interact in learning? I focus on cases where a single learning problem is solved with more than one structure, meaning not merely two different contents but two ways of presenting content that differ in how they organize information. I'll present two cases of contrasting structures, from the domains of explanation and memory, using both experimental and theoretical techniques to ground a hypothesis about what these structures are and why we might need more than one of them. (1) In response to why-questions, people offer both narrative and abstract explanations. These two structures, I'll argue, work together to allow us to understand and communicate - and data from adult learners suggest that both structures are in some sense equally explanatory. (2) In memory, spatial (and spatio-temporal) map-like structures have been posited to extend to all kinds of knowledge domains beyond the literally spatial. But what is lost when we extend the concept of a map this far? In both cases, an experiential, temporal structure works alongside a more allocentric, semantic structure. But this symmetry papers over deeper differences in function. On my view, the narrative/abstract dichotomy in explanation and the episodic/semantic one in memory display the same general trade-off between flexibility and strength of structure, though they differ structurally in many significant ways. Putting these cases together, we see that even simple learning problems are not best solved by finding the "right" structure, but instead require a more complex array of structures. This is a clue to the utility of multiple, distinct explanatory structures.
Mathematics has long been a source of philosophical puzzlement. On the one hand, the truth of mathematical statements, such as (1), would seem to require the existence of mathematical objects: (1) There are prime numbers less than 10. On the other hand, such objects, if they exist, seem very strange by common standards: apparently acausal and lacking spatiotemporal location. Mathematical fictionalists respond to the above predicament by accepting the conditional that if mathematical statements are true, then mathematical objects exist, while also denying that such objects exist and, hence, that mathematical statements are true. In this talk we discuss variants of fictionalism – hermeneutic mathematical fictionalisms (HMFs) – which incur commitments regarding the psychological attitudes expressed by speakers when uttering mathematical statements like (1). To a first approximation, all HMFs maintain that when speakers utter mathematical statements, they seldom, if ever, express prototypical beliefs of the sort expressed by assertions of empirical fact. On this view, for example, an utterance of (1) seldom, if ever, expresses the belief that there are prime numbers less than 10. Some versions maintain that another attitude – sometimes called 'acceptance' – is expressed by utterances like (1). Others suggest that the attitude expressed is belief, but that the content of the belief diverges from the apparent meaning of the mathematical statement. Crudely put, it is in some way figurative, fictive, or otherwise non-literal. Crucially, all such views incur empirical commitments regarding the psychological attitudes of speakers. Given that HMFs incur such commitments, one would expect them to yield broadly behavioral predictions that diverge from those associated with the hypothesis that utterances of mathematical statements express literal belief. In view of this, we explore the extent to which some variants of HMF make such contrastive predictions. In doing so, we first note that some views appear too vague to yield determinate predictions. We then present some preliminary empirical results regarding more perspicuous versions of HMF to see whether their predictions are borne out. We conclude by suggesting that this preliminary study may provide a template for studying other views in the philosophy of science which incur commitments regarding psychological attitudes. Specifically, we suggest that analogous research may be relevant to some familiar issues in the literature on scientific realism.
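For concreteness, here is a standard first-order regimentation of (1); this rendering is supplied as an illustration and is not taken from the abstract itself. Under this reading, the truth of (1) requires that some number witnesses the existential quantifier, which is what makes the fictionalist's predicament vivid:

    $\exists n \, (\mathrm{Prime}(n) \wedge n < 10)$
    % witnessed by n = 2, 3, 5, or 7; read literally, the statement's truth
    % therefore requires that at least one such number exists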
Engaging mental models in prediction and explanation to support learning in early childhood
Multiple accounts of human learning exist, each suggesting multiple routes for encoding information. One often-contrasted aspect of learning is whether the route involves "bottom-up" associative mechanisms or a top-down, model-based approach. In model-based approaches, the learner is actively constructing predictions of possible worlds while engaging in inference. As much past research has highlighted, which kind of learning is engaged can depend on working memory, processing speed, and the agent's prior knowledge. But less work has focused on how the same kind of content might be learned differently depending on whether or not a learner's "mental models" are engaged, nor have factors like explanation and prediction been explicitly linked to these different processes in neurology, physiology, and behavior. In this talk, I will discuss several lines of research pointing to the role of explanation and prediction as key tools in helping children engage in "model-based" learning in causal, scientific thought. The research presented will bring together computational models with neurological, physiological, and behavioral data in infants, preschoolers, and early elementary school children, to provide an integrated account of the role of model building through prediction and explanation in learning. Specifically, the first study will present emerging evidence from our lab employing a neurological marker of uncertainty and active learning (theta response); our preliminary results (N=34 infants) provide evidence of infants' (M=16 months) sensitivity to causally confounded and unconfounded events through differing theta responses (co-authored with Begus). This work suggests that even infants are sensitive to uncertainty in causal explanatory events and engage in "mental preparation" for explanatory content. The second study will focus on how prompting preschoolers (N=72) to "self-explain" a causal event (through pedagogical questions in a three-week training study) leads to better learning, generalization, and memory of scientific content in the biological domain, as compared to preschoolers who passively listen to the content or a control group (co-authored with Daubert, Yu, and Shafto). This work suggests a role for self-explanation in engaging the learner in model building and learning. The third study explores the role of "thought experiments" (self-generated explanatory, predictive models) in helping 6- to 7-year-old learners engage in conceptual change in the physical domain (co-authored with Bascandziev). The fourth study presents a computational model that predicts surprise based on a learner's prior beliefs and the evidence observed, and then compares the model to a physiological measure commonly suggested to capture surprise: pupil dilation. In this final study, I present evidence that a computational model integrating beliefs and evidence predicts school-aged children's (N=95, aged 6-9 years) surprise at a water-displacement event. Critically, this prediction was only borne out when participants were encouraged to predict an outcome prior to observing it, suggesting the critical role of explanatory, top-down model building in surprise and belief revision (co-authored with Brod, Theobald, Bascandziev, and Colantonio). Taken together, this work will support the claim that active prediction and explanation aid learning by engaging top-down, model-based mechanisms.
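As a rough illustration of the kind of computation a belief-plus-evidence surprise model involves (a minimal sketch under assumed details, not the authors' actual implementation), surprise can be quantified as the divergence between a learner's prior beliefs and the posterior beliefs obtained after observing the evidence:

    import numpy as np

    def bayesian_surprise(prior, likelihood):
        # Surprise as KL divergence between posterior and prior beliefs over hypotheses.
        # 'prior' and 'likelihood' are illustrative placeholders, not the study's model.
        prior = np.asarray(prior, dtype=float)
        posterior = prior * np.asarray(likelihood, dtype=float)
        posterior /= posterior.sum()
        return float(np.sum(posterior * np.log(posterior / prior)))

    # Hypothetical toy case: a child weighing a "size rule" against a "volume rule"
    # for water displacement observes an outcome the size rule makes unlikely.
    prior = [0.7, 0.3]        # assumed prior belief in each rule
    likelihood = [0.1, 0.9]   # assumed P(observed displacement | rule)
    print(bayesian_surprise(prior, likelihood))   # larger value = greater surprise

On the abstract's account, a quantity of this sort would track pupil dilation only when children first commit to an explicit prediction before seeing the outcome.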
The processes by which we discover or develop novel scientific theories are sometimes argued to fall outside the scope of rationality; most famously, Popper argued that there was no "logic of scientific discovery." In this talk, I will argue that there are at least two different paths by which we can (and do) rationally discover novel scientific theories. One route is through the use of automated discovery methods applied to data and phenomena, including modern machine learning algorithms. A second route is through rational reasoning grounded in the intertheoretic constraints imposed by other (tentatively accepted) scientific theories. For each pathway, I will give examples of rational theory discovery within cognitive science to argue that considerations of rationality play a role in actual scientific reasoning practices. Given these two different (potentially) rational discovery pathways, I will then consider the explanations provided by each. In particular, the first pathway principally yields theories that can explain patterns in our data and phenomena, while the second pathway primarily leads to theories that can explain the content of, and connections between, disparate elements of scientific theory. Of course, a novel theory that results from either pathway can potentially provide explanations of the other type, but only after additional scientific effort. I (re)use the cognitive science examples to vividly illustrate this difference in explanatory target. Finally, I argue, in light of the pragmatic, future-directed nature of explanations, for a "meta-rationality" constraint on scientific theory discovery: namely, our selection of one rational method (rather than another) is itself subject to rational evaluation.
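To make the first, data-driven pathway concrete, here is a toy sketch of automated discovery as model search plus scoring; the candidate set, data, and BIC criterion are illustrative choices of mine, not examples drawn from the talk:

    import numpy as np

    # Toy "automated discovery" loop: fit candidate functional forms to data
    # and keep the one with the best (lowest) BIC score.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 2.0 * x**2 + rng.normal(0.0, 0.05, size=x.size)   # hypothetical observations

    candidates = {"linear": 1, "quadratic": 2, "cubic": 3}  # polynomial degrees
    scores = {}
    for name, degree in candidates.items():
        coeffs = np.polyfit(x, y, degree)
        rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
        k = degree + 1                                       # fitted parameters
        scores[name] = x.size * np.log(rss / x.size) + k * np.log(x.size)

    print(min(scores, key=scores.get))   # the "discovered" model of the data

A theory recovered this way explains patterns in the data, which is the explanatory target the talk attributes to this pathway; the second, intertheoretic route is not naturally captured by such a loop.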
Navigating the Conflict Between Science and Intuition
Before learning scientific theories, we form intuitive theories of the same phenomena. Intuitive theories provide us with explanations and predictions, like scientific theories, but they rely on categories that play no role in science. In the domain of biology, for instance, children intuitively identify life with self-directed motion, leading to the misconception that the sun and the clouds are alive but plants are not. In physics, children intuitively identify matter with perceivability, yielding the misconception that heat and light are material substances but gases are not. In this presentation, I will explore how intuitive theories of life and matter automatically compete with their scientific successors, as revealed by a statement-verification task. Participants are asked to verify scientific statements as quickly as possible. Some statements are consistent with intuitive theories, such as "tigers are alive," which is both scientifically true and intuitively true, or "rocks are alive," which is both scientifically false and intuitively false. Other statements are inconsistent with intuitive theories, such as "oaks are alive," which is scientifically true but intuitively false, or "the sun is alive," which is scientifically false but intuitively true. Across concepts and domains, the latter type of statement is verified less accurately and more slowly than the former, indicating that intuitive theories are not erased by scientific theories but coexist with them instead, yielding internal conflict when the two theories provide divergent inferences or interpretations. Using this paradigm, we have found that people can learn to verify scientific statements more accurately, but they cannot learn to verify them more quickly. Scientists verify scientific statements more accurately than non-scientists but still take longer to verify counterintuitive statements relative to intuitive ones. Priming people to think more scientifically, with diagrams and models, increases response accuracy but has no effect on response times. The same holds for training people to think more scientifically by providing targeted instruction in the relevant domain. Instruction increases the accuracy of participants' verifications but has little effect on speed. That is, instruction shrinks the gap between intuitive and counterintuitive statements in terms of how accurately they are verified but does not affect the gap in how quickly they are verified. This finding holds for both adults and children, even preschool-aged children who are just beginning to construct scientific theories of the natural world. The finding that accuracy is malleable when reasoning about counterintuitive scientific ideas but speed is not suggests that intuitive theories are activated automatically by the phenomena they were meant to explain, even for those practiced at reasoning scientifically. These findings also suggest that science education should not focus on erasing intuitive theories or resolving the conflict between intuitive and scientific theories but should instead focus on providing skills to prioritize science over intuition. Such skills include inhibitory control, set-shifting ability, and cognitive reflection, all of which have been shown to facilitate scientific reasoning in contexts where students are prone to rely on pre-scientific intuitions instead.
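As a minimal sketch of the 2 x 2 design behind the statement-verification task (the example statements come from the abstract above; the classification code is my own illustration, not the study's materials):

    # Each statement is crossed on scientific truth and intuitive truth;
    # statements where the two disagree are the counterintuitive items.
    stimuli = [
        # (statement,           scientifically_true, intuitively_true)
        ("tigers are alive",    True,  True),
        ("rocks are alive",     False, False),
        ("oaks are alive",      True,  False),
        ("the sun is alive",    False, True),
    ]

    for text, sci, intuit in stimuli:
        condition = "intuitive" if sci == intuit else "counterintuitive"
        # Prediction from the talk: counterintuitive items are verified more
        # slowly and less accurately, because the intuitive theory competes
        # with the scientific one.
        print(f"{text!r}: {condition}")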