Speakers

Keynote Speakers:

Judith Degen Wonky Worlds: Modeling Prior Beliefs and Common Ground in Pragmatic Inference
Abstract: World knowledge enters into pragmatic utterance interpretation in complex ways and may be defeasible in light of speakers’ utterances. While effects of world knowledge on syntactic and semantic processing are well-established, there is to date a surprising lack of systematic investigation into the role of world knowledge in pragmatics. Here, we show that a state-of-the-art Bayesian model of pragmatic interpretation within the Rational Speech Act framework greatly overestimates the influence of world knowledge on the interpretation of utterances like “Some of the marbles sank”. We extend the model to capture the idea that listeners have uncertainty about the background knowledge the speaker is bringing to the utterance situation – and in effect, about the beliefs assumed to be in common ground. This extension greatly improves model predictions of listeners’ interpretation and also makes good qualitative predictions about listeners’ judgments of how ‘normal’ the world is in light of a speaker’s statement. We discuss alternatives to assuming malleable prior beliefs, including assuming that the speaker is uncooperative and that the speaker could have remained silent, and show for both that they are not the source of the compressed effect of world knowledge on utterance interpretation. We argue that this case study is an excellent demonstration of how combining behavioral experimentation and probabilistic computational modeling allows us to gain otherwise inaccessible insights into the interplay between language and cognition.
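
A minimal illustrative sketch of the kind of extension described (not the authors' implementation): a Rational Speech Act listener that is uncertain whether its usual world knowledge applies and marginalizes over a "wonkiness" variable. The priors, the wonkiness rate, and the rationality parameter below are invented for illustration.

```python
import numpy as np

# Sketch of an RSA pragmatic listener with uncertainty about the world prior.
# All numbers here are illustrative assumptions, not fitted values.

states = np.arange(5)                       # how many of 4 marbles sank: 0..4
utterances = ["none", "some", "all"]

def meaning(u, s):                          # literal truth conditions
    return {"none": s == 0, "some": s > 0, "all": s == 4}[u]

usual_prior = np.array([0.01, 0.01, 0.02, 0.06, 0.90])   # marbles usually sink
wonky_prior = np.ones(5) / 5                              # back-off prior

def literal_listener(u, prior):
    p = np.array([meaning(u, s) for s in states]) * prior
    return p / p.sum()

def speaker(s, prior, alpha=3.0):           # soft-max informative speaker
    util = np.array([np.log(literal_listener(u, prior)[s] + 1e-10)
                     for u in utterances])
    p = np.exp(alpha * util)
    return p / p.sum()

def pragmatic_listener(u, p_wonky=0.5):
    # Jointly infer the state and whether the world is "wonky",
    # then marginalize out wonkiness.
    posterior = np.zeros(len(states))
    for wonky, pw in [(False, 1 - p_wonky), (True, p_wonky)]:
        prior = wonky_prior if wonky else usual_prior
        for s in states:
            posterior[s] += pw * prior[s] * speaker(s, prior)[utterances.index(u)]
    return posterior / posterior.sum()

print(pragmatic_listener("some"))   # less dominated by world knowledge than plain RSA
```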

Keith Stenning An Attempt at Certainty about which Kinds of Uncertainty we are Dealing with?
Abstract: A recent experiment conducted with Laura Martignon of the MPI in Berlin explores an integration of Logic Programming (LP) and fast and frugal decision heuristics (FFH) to provide a probability-free cognitive process model of naive causal reasoning in ‘interpretational uncertainty’. This talk will use this experiment as an illustration to propose a more careful approach to distinguishing kinds of uncertainty. Only some kinds of uncertainty can be modelled by probability, and LP models are not some sort of cognitive approximation to probabilistic models. So it becomes an important question which reasoning situations humans face are appropriately modelled by which formal/computational frameworks. In other words, the talk will attempt to be more precise about the great philosopher Donald Rumsfeld’s distinction between the known unknowns and the unknown unknowns.

Contributed Talks:

Daniel Lassiter and Noah Goodman Nested and Informative Epistemics in a Graphical Models Framework
Abstract: We propose a new semantics and pragmatics for epistemic statements which builds on the systems of Yalcin and Moss, but offers several empirical advantages. The key improvements stem from (a) modeling information states using probabilistic graphical models, and (b) a new method of treating probabilities as ordinary random variables, making it possible to condition on probability statements such as “Rain is likely” [≈ P(rain) > .5]. This feature makes it possible to account for the dynamic effects of epistemic sentences while maintaining a thoroughgoing Bayesianism, with conditioning as the only update operation. Nested epistemic statements are also given a natural interpretation in terms of higher-order probability, which is implicitly defined once probabilities are treated as random variables. This approach forges new connections between modal semantics and a framework for knowledge representation which is highly influential in psychology, artificial intelligence, and philosophy, but has previously had little impact in semantics and pragmatics.
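
As a rough illustration of the idea of conditioning on a probability statement (a Monte Carlo sketch under assumed priors, not the authors' system), one can treat the probability of rain itself as a random variable and condition on it exceeding .5:

```python
import numpy as np

# Sketch: the probability of rain is a random variable; "Rain is likely" is
# read as P(rain) > 0.5. The Beta prior and threshold are illustrative assumptions.

rng = np.random.default_rng(0)
n = 100_000

p_rain = rng.beta(2, 2, size=n)            # prior belief about the chance of rain
rain = rng.random(n) < p_rain              # whether it actually rains

keep = p_rain > 0.5                        # condition on "Rain is likely"

print("posterior mean of P(rain):", p_rain[keep].mean())      # shifts above 0.5
print("posterior probability of rain:", rain[keep].mean())    # raised accordingly
```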

Henk Zeevat Presupposition and Causal Inference
Abstract: This is a fragment of a 40-page draft paper for a linguistic audience on an important mistake in the account of presuppositions. The linguistics is omitted, except for a brief summary in the middle, but the attempt at developing a version of Bayesian interpretation for dealing with this problem is intact. The paper develops NLI as Bayesian interpretation within classical update semantics and provides a basis for presupposition projection as a special case of bridging inferences, causal and identity inferences. The technical contribution is the attempt to develop a dynamic stochastic comparison operator $\varphi < \psi$ based on a set of distributions that learn the strength of causal connections on the basis of incoming updates. The same ideas can be applied to a whole series of problems in NL semantics and pragmatics.
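
One ingredient mentioned above, distributions that learn the strength of causal connections from incoming updates, can be sketched with simple Beta counts. The representation below and the reading of the comparison operator as a comparison of posterior means are assumptions made purely for illustration, not the paper's definitions.

```python
from dataclasses import dataclass

# Toy sketch: keep a Beta-style count for each causal connection and update it
# as new observations arrive. Everything here is an illustrative assumption.

@dataclass
class CausalLink:
    successes: float = 1.0     # cause followed by effect
    failures: float = 1.0      # cause not followed by effect

    def update(self, effect_observed: bool) -> None:
        if effect_observed:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def strength(self) -> float:          # posterior mean of the Beta distribution
        return self.successes / (self.successes + self.failures)

def weaker_than(phi: CausalLink, psi: CausalLink) -> bool:
    # One possible (assumed) reading of a stochastic comparison phi < psi.
    return phi.strength < psi.strength

smoke_fire = CausalLink()
rain_wet = CausalLink()
for obs in [True, True, False, True]:
    smoke_fire.update(obs)
for obs in [True, True, True, True, True]:
    rain_wet.update(obs)

print(weaker_than(smoke_fire, rain_wet))   # True under these toy updates
```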

Satoru Suzuki Intuitionistic-Bayesian-Semantic Foundations of First-Order Logic for Generics
Abstract: Generics are used frequently in various natural languages. Cohen’s (1999) theory, which gives a probabilistic account of generics, is one of the most promising theories of generics. Leslie (2007, 2008) points out three shortcomings of Cohen’s theory, and Asher and Pelletier (2013) point out five more. The aim of this paper is to propose a new logic for generics, First-Order Logic for Generics (FLG), that can overcome all eight shortcomings. To accomplish this aim, we provide the language of FLG with an intuitionistic-Bayesian semantics.

Iris van de Pol and Ronald de Haan On the Approximability of Optimization Theories of Cognition
Abstract: Many cognitive scientists recognize computational tractability as an important condition for the plausibility of computational theories of cognition. A common way of dealing with the intractability of a theory is by assuming the existence of approximation algorithms that can tractably approximate the theory at the algorithmic level. We strengthen existing evidence that shows that it is not straightforward that such tractable approximation algorithms indeed exist, and that therefore a case-by-case investigation is called for. We use ideas and results from subexponential-time complexity to extend a formal framework for distinguishing between theories that can and cannot be approximated tractably. We adopt a more liberal notion of approximation (allowing less accurate approximation), and our results apply to a larger class of approximation algorithms. Using a case study, we illustrate how this extended framework can be used to argue that there are relevant theories for which the existence of tractable approximation algorithms is arguable.

Michael Henry Tessler A prevalence-based account of generic language
Abstract: Generic utterances (e.g. “Dogs bark”) are ubiquitous in natural language. Despite their prevalence, the meanings of generic statements are puzzling to formal approaches. For example, what percentage of the category must display the property (i.e. what threshold must the prevalence cross) for the generic to be true? In this work, we formalize a prevalence-based semantics with the threshold for acceptance represented as an unknown property of the language; the threshold is actively reasoned about in context by the listener. We compare our model to empirically elicited acceptability judgments of generic utterances. We show how the prevalence of a property as it is typically examined (within a category alone) cannot account for the range of truth judgments observed. A Bayesian language understanding model in the Rational Speech-Acts framework that reasons about the prevalence of the property both within and across categories does account for the data. Despite being acceptable for a range of prevalences, generic statements are additionally puzzling because they are often interpreted as applying to nearly all of a category. We replicate an experimental finding confirming this intuition and show how the prevalence-based model predicts this asymmetry between truth judgments and interpretations. We argue that the semantics of generic statements can be treated as scalar with uncertainty over the threshold for prevalence.
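
A minimal sketch of a threshold semantics with an uncertain threshold (the priors, grid sizes, and the silent alternative utterance are illustrative assumptions, not the paper's fitted model): the listener jointly infers the prevalence and the threshold, and the resulting interpretation skews toward high prevalence.

```python
import numpy as np

# Sketch: "Ks F" is literally true iff the prevalence of F among Ks exceeds a
# threshold theta; the listener reasons about theta jointly with the prevalence.

prevalence = np.linspace(0.01, 0.99, 50)       # candidate prevalence values
thetas = np.linspace(0.0, 0.95, 20)            # candidate thresholds

prev_prior = np.ones_like(prevalence) / len(prevalence)   # placeholder prior
theta_prior = np.ones_like(thetas) / len(thetas)          # uniform over thresholds

def speaker_prob_generic(prev, theta, alpha=2.0):
    # Speaker chooses between uttering the generic and staying silent.
    literal = {"generic": float(prev > theta), "silent": 1.0}
    util = {u: np.log(v + 1e-10) for u, v in literal.items()}
    z = sum(np.exp(alpha * u) for u in util.values())
    return np.exp(alpha * util["generic"]) / z

def listener(utterance="generic"):
    joint = np.zeros((len(prevalence), len(thetas)))
    for i, prev in enumerate(prevalence):
        for j, theta in enumerate(thetas):
            joint[i, j] = (prev_prior[i] * theta_prior[j]
                           * speaker_prob_generic(prev, theta))
    joint /= joint.sum()
    return joint.sum(axis=1)                   # marginal over prevalence

posterior = listener()
print("expected prevalence given the generic:",
      float((posterior * prevalence).sum()))   # skews toward high prevalence
```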

Jakub Szymanik Probabilistic Mental Logic for Human Reasoning
Abstract: The aim of this project is to study the mental logic possibly underlying human reasoning. In the talk I will focus on a probabilistic natural logic for syllogistic reasoning that captures some aspects of human performance on inferential tasks. We used empirical data and machine learning techniques to assign a difficulty weight to each inference rule and then compared the complexity of the minimal proof with the cognitive difficulty of particular syllogistic inferences. Time permitting, I will also discuss a learning system for logical reasoning.
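
As a toy illustration of scoring a syllogism by the weight of its cheapest derivation (the rules and weights below are invented for illustration, not the weights learned in the project):

```python
import heapq
from itertools import count

# Each inference rule carries a difficulty weight; the difficulty of a syllogism
# is approximated by the cheapest derivation of its conclusion from its premises.
# Statements are tuples like ("all", "A", "B") or ("some", "A", "B").
RULES = [
    ("barbara", 1.0, lambda p, q: ("all", p[1], q[2])
        if p[0] == "all" and q[0] == "all" and p[2] == q[1] else None),
    ("darii", 2.0, lambda p, q: ("some", p[1], q[2])
        if p[0] == "some" and q[0] == "all" and p[2] == q[1] else None),
    ("some-conversion", 0.5, lambda p, q: ("some", p[2], p[1])
        if p[0] == "some" else None),
]

def minimal_proof_cost(premises, conclusion):
    # Best-first search over sets of derived statements, accumulating rule weights.
    start = frozenset(premises)
    tie = count()                              # tie-breaker so the heap never compares sets
    frontier = [(0.0, next(tie), start)]
    best = {start: 0.0}
    while frontier:
        cost, _, known = heapq.heappop(frontier)
        if conclusion in known:
            return cost
        for p in known:
            for q in known:
                for _, w, rule in RULES:
                    c = rule(p, q)
                    if c is not None and c not in known:
                        nxt = known | {c}
                        if cost + w < best.get(nxt, float("inf")):
                            best[nxt] = cost + w
                            heapq.heappush(frontier, (cost + w, next(tie), nxt))
    return float("inf")

premises = [("some", "A", "B"), ("all", "B", "C")]
print(minimal_proof_cost(premises, ("some", "A", "C")))   # 2.0 via darii
```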