Distinctions within Causation: Irreversibility, Coherence, Invariance, and Machine-Likeness
November 11, 2021, 1:00 PM - 3:00 PM (America/New_York)

Research in the philosophy of causation has traditionally focused on the fundamental question of how to distinguish causation from correlation. In addition to this focus, philosophers and scientists have more recently turned their attention to a new and important topic: how to distinguish among different types of causal relationships. This symposium proposal contributes to this endeavor by analyzing four features of type-causal relationships that have yet to receive significant attention in the literature: the degree to which causal relationships exhibit (i) [ir]reversibility, (ii) invariance, (iii) coherence, and (iv) machine-likeness. These features are examined in four talks (one co-authored), given by four philosophers and one scientist, all of whom conduct research on causation and causal reasoning. Each talk examines one of the four features by providing an analysis of how the feature should be understood, what distinction within causation it clarifies, and why this distinction matters to scientific practice as well as to everyday causal reasoning.
Symposium Paper Abstracts | Causality | 01:00 PM - 01:30 PM (America/New_York)
Recent work in the philosophy of causation literature examines distinctions among different varieties of type-causal relationships (Woodward 2010; Weber 2017; Blanchard et al. 2018; Ross 2018). This talk examines a new distinction, which has yet to be discussed in this literature–whether causes produce their effects in an irreversible or reversible manner. In order to motivate this distinction, consider the common assumption that if changing X from 0 to 1 changes Y from 0 to 1, then returning X to 0 will return Y to 0. Many causal relations are (or are assumed to be) reversible in this sense. If a patient's high salt diet has caused her blood pressure to increase, then reducing her salt consumption should reverse this, causing her blood pressure to go down. This sort of reversibility is also standardly assumed in the causal modeling literature, when, for example, linear or Boolean equations are used to represent causal relations. However, many causal relations are not reversible, including many of those used as examples in the philosophical literature. When Suzy's thrown rock causes a bottle to break, her rock cannot be "unthrown" in a way that restores the broken bottle.
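A minimal sketch of this contrast, assuming simple binary variables and purely hypothetical mechanisms (not models taken from the talk), treats the reversible case as an effect that tracks the current value of its cause, and the irreversible case as an effect that "latches" once the cause has ever taken the value 1:

```python
# Toy sketch of reversible vs. irreversible mechanisms for binary variables.
# Both "mechanisms" are illustrative assumptions, not models from the talk.

def reversible_effect(x_history):
    """Y tracks the current value of X: setting X back to 0 restores Y to 0."""
    return x_history[-1]

def irreversible_effect(x_history):
    """Y latches: once X has ever been 1, Y stays 1 (e.g., the broken bottle)."""
    return 1 if any(x_history) else 0

# Intervene on X: set it to 1, then return it to 0.
trajectory = [0, 1, 0]
print(reversible_effect(trajectory))    # 0 -- the effect is undone
print(irreversible_effect(trajectory))  # 1 -- the effect persists
```

The latch-like mechanism is only one way irreversibility might be modeled, but it makes vivid why the standard linear or Boolean equations mentioned above build reversibility in by default.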
We explore some consequences of the irreversible-reversible distinction for understanding scientific practice and for its possible improvement. First, causes that operate irreversibly often involve a salient change from an earlier state and are followed by a salient change in the effect: the rock changes from not being thrown to being thrown, the bottle from not broken to broken. This contrasts with causes that remain continuously in the same state and maintain or sustain an effect in the same state, as when a table continually supports a book. This difference in salience often leads us to pay more attention to irreversible causal relations, as one sees with the examples that have dominated philosophical discussion. This has both bad and good consequences. On one hand, it can lead to the neglect of sustaining or continuous causal relations when they are important. On the other hand, when an effect cannot be reversed once triggered by its cause and the effect itself is undesirable, it is reasonable to pay special attention to this fact and to avoid triggering the causes in question. A first benefit of this distinction is that it helps explain why irreversible causes receive more attention than reversible causes in many scientific fields. One main reason for this is that if an effect cannot be reversed once triggered by its cause, we are more careful in choosing when to set off these causes. A second benefit comes from appreciating how systems with reversible and irreversible causes should be treated differently when it comes to collecting and interpreting data from these systems (Lieberson 1985). A third benefit is that this distinction matters for various recommendations that are made for controlling effects, such as disease outcomes. Irreversible causes will motivate preventive strategies, while reversible causes should lead scientists to suggest both preventive and curative measures.
Representations of Invariance in Human Causal Induction
Symposium Paper Abstracts | Causality | 01:30 PM - 02:00 PM (America/New_York)
A critical distinction in causation is that of invariance – the extent to which a putative cause acts "the same" across the many compounds and contexts in which it occurs. The feature of invariance is central to the reproducibility of scientific results, as well as to the generalization of human causal knowledge from learning contexts to novel situations. For example, a pharmaceutical drug found to reduce hypertension in adult males may, or may not, have a different effect on females, or on male adolescents, or may behave differently when combined with a different release mechanism; the implications for clinical, commercial, and personal decision making are profound. Notably, psychological theories of human causal induction differ on how invariance is defined, as well as on whether occlusion of invariance by confounding prevents causal discovery. In particular, causal invariance may be defined with respect to unobservable causal influences, formalized as their noisy-logical integration, or with respect to the observable, additive difference made to the state of the effect. Moreover, while Bayesian models of causal induction predict that occlusion of invariance by confounding will generate high levels of uncertainty in judgments, associative, error-driven models do not. I will discuss a series of behavioral and neuroimaging studies aimed at assessing how naïve human reasoners define independent causal influence, and how deviations from the independent influence and independent occurrence of putative causes modulate uncertainty in causal inferences. I will show that, when asked to make judgments about the influences of a set of fictitious putative causes, reasoners predominantly adopt a noisy-logical definition of independent influence, and report high levels of uncertainty for both interacting and confounded causes. At the neural level, activity in dissociable substrates scales with noisy-logical and linear integrations of causal influences respectively, and with Bayesian vs. associative uncertainty signals. I will argue, based on these results, that human reasoners make tacit assumptions that align with normative accounts of causal induction, and with basic principles of scientific inference.
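As a rough illustration of the two definitions at issue, assuming just two binary causes with strengths w1 and w2 (a schematic rendering, not the models tested in the studies), noisy-logical and linear integration diverge when the causes co-occur:

```python
# Schematic contrast between noisy-logical (noisy-OR) and linear (additive)
# integration of two independent causal influences; w1, w2 are causal strengths,
# c1, c2 indicate whether each cause is present (1) or absent (0).

def noisy_or(w1, w2, c1, c2):
    """Noisy-logical integration: each present cause independently produces
    the effect, so the probabilities of failure multiply."""
    return 1 - (1 - w1 * c1) * (1 - w2 * c2)

def linear(w1, w2, c1, c2):
    """Linear integration: influences add in the effect's probability
    (capped at 1)."""
    return min(1.0, w1 * c1 + w2 * c2)

# Two causes of strength 0.6, both present:
print(noisy_or(0.6, 0.6, 1, 1))  # 0.84
print(linear(0.6, 0.6, 1, 1))    # 1.0
```

On the noisy-logical definition, each cause's influence remains invariant across compounds even though the observed difference it makes to the effect shrinks when another cause is present; on the linear definition, invariance is tied to that observable, additive difference itself.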
Symposium Paper Abstracts | Causality | 02:00 PM - 02:30 PM (America/New_York)
The ability to control phenomena by targeted interventions is thought to be an important criterion for selecting explanatorily relevant causes from background conditions (Ross forthcoming). Different kinds of causal control have been examined in the recent literature, for example, control that is fine-grained as opposed to switch-like, more or less stable with respect to background conditions, or more or less proportional with respect to its effect (Woodward 2010). In this contribution, I explore another feature that some control variables have, namely the coherence of their causal response. The basic phenomenon I am after is this: Biologists have discovered molecules that can be manipulated such that they change the values of a large number of downstream variables in such a way that they together produce a coherent response in a biological system. I shall refer to causal variables with such control properties as "coherent control variables" or "CCVs". By "coherent" I mean that the CCVs' downstream variables are caused to take on a distribution (or a time function) of values that allows the system in question to perform a specific function at some defined rate or to assume a specific developmental pathway. In such cases, the control variable typically selects from a range of alternative physiological states (e.g., increased, constant, or decreased heart rate) or alternative developmental pathways (e.g., neural versus epidermal development). In order for this to work, the values that the causal descendant variables take in response to the value of the control variable must somehow be tuned to each other so as to perform some biological function or activity. It is this kind of tuning that my present analysis tries to capture. Biological examples of what I have in mind include (1) hormones functioning in growth control, physiological regulation, and integration, (2) signal-transducing molecules like protein kinases and phosphatases and the "second messengers" such as cyclic AMP, inositol trisphosphate (IP3), or calcium ions that control them, and (3) inducers and gradient-forming morphogens in embryonic pattern formation. CCVs clearly require a certain structure in the underlying causal network, with many direct causal descendants. But in addition, their values somehow constrain the distribution of values taken by their descendants such that the descendants perform a biologically significant function at some specific rate or follow one developmental pathway rather than another. (Of course, technological systems often have similar control variables.) In my talk, I provide a formal analysis of CCVs based on causal Bayes nets and argue that such variables often take center stage in the quest for understanding complex systems, without necessarily having any of the other properties that are thought to distinguish some causal relations from others (stability, invariance, probability, proportionality, specificity).
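A minimal toy sketch of the structure at issue, with purely hypothetical variable names and no pretense of capturing the formal Bayes-net analysis, contrasts a coherent control variable, whose value selects a coordinated profile of downstream values, with a cause whose descendants respond in an untuned, independent manner:

```python
# Toy sketch of a coherent control variable (CCV): its value selects a
# coordinated profile of many downstream variables. All names are hypothetical.
import random

DESCENDANTS = ["heart_rate", "glucose_release", "airway_dilation"]

def coherent_control(c):
    """A CCV: each of its values picks out one tuned configuration of the
    descendants, realizing a coordinated physiological state."""
    profiles = {
        0: {"heart_rate": "low", "glucose_release": "low", "airway_dilation": "low"},
        1: {"heart_rate": "high", "glucose_release": "high", "airway_dilation": "high"},
    }
    return profiles[c]

def untuned_cause(c):
    """Same descendants, but their responses are not tuned to one another,
    so no coherent system-level function results."""
    return {d: random.choice(["low", "high"]) for d in DESCENDANTS}

print(coherent_control(1))  # coordinated profile
print(untuned_cause(1))     # arbitrary combination
```

The point of the contrast is structural: both causes have many direct descendants, but only the first constrains their joint distribution so that a system-level function is performed.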
References
Ross, Lauren N. Forthcoming. "Causal Control: A Rationale for Causal Selection." In Minnesota Studies in Philosophy of Science Volume on Philosophical Perspectives on Causal Reasoning in Biology. Minneapolis: University of Minnesota Press.
Woodward, James. 2010. "Causation in Biology: Stability, Specificity, and the Choice of Levels of Explanation." Biology and Philosophy 25: 287–318.
Symposium Paper Abstracts | Causality | 02:30 PM - 03:00 PM (America/New_York)
Biological Machines: A Sober Defense
Arnon Levy
Analogies between biological systems and manmade machines are common across a wide variety of contexts, from molecular biology to physiology, guiding investigative and explanatory practice in important ways. Increasingly, however, they have come under scrutiny from both scientists and philosophers (Kirchner et al. 2000; Karagiannis et al. 2014; Nicholson 2013, 2019). Critics claim that living systems differ in fundamental ways from engineered machines, and that the analogy is obsolete given recent experimental and theoretical advances. My goal in this paper is to clarify and evaluate the machine analogy, especially in cellular and molecular contexts, offering a qualified defense of it. The main criticisms of the machine analogy appear to be: (1) It exaggerates the degree to which biological systems are deterministic. (2) It incorrectly assumes that biological mechanisms consist of a set menu of parts and a fixed layout. (3) It obscures the fact that biological systems at the molecular level operate in a thermal, rather than a macroscopic-mechanical, environment. (4) It is incompatible with the fact that biological systems are oftentimes self-organizing and dynamically stable. I begin by arguing that, in general, there is no answer to a question of the form "is X (the cell, a tissue, the whole body) a machine?" Rather, we should ask whether, with respect to a given behavior or feature, the underlying causal system is machine-like. This holds for ordinary manmade machines: a toaster is a machine with respect to toasting bread, but not with respect to exerting gravitational force on the kitchen counter. Thus, we should ask how suitable a machine analogy is, relative to specified explananda. On this basis, I suggest that we fix the meaning of 'machine' in accordance with the explanatory strategies with which it is associated. Specifically, I will rely on previous work, wherein I suggested a view of machine-likeness as tied to a system's degree of division of causal labor (a notion I label 'causal order'). This, in turn, is closely associated with the potential for providing decompositional explanations (Levy 2014). I use this understanding of machine-likeness-as-order to address the criticisms outlined above. I will argue that advocates of the machine analogy should not be too worried about (1) and (3), since indeterminism and reliance on thermal energy are fully consistent with the system in question exhibiting a division of labor. Criticism (2), however, poses a potential challenge for the analogy, inasmuch as it threatens the identification of stable functional roles, an essential aspect of decompositional explanation. Meanwhile, criticism (4) is relevant primarily for developmental questions (broadly construed), a context in which machine analogies have less a priori plausibility. I illustrate these claims by looking at recent work on molecular motors on the one hand, and reaction-diffusion models of pattern formation on the other.