The kick-off workshop for the project Cognitive Semantics and Quantities will take place September 28 and 29 in Amsterdam. The talks are open to the public. The program and abstracts appear below.
Thursday, September 28
Location: the Doelenzaal of the University Library, Singel 425
- 9:00 – 9:20: Welcome by Jakub Szymanik
- 9:20 – 10:00: Michael Glanzberg (Northwestern University): The Cognitive Roots of Adjectival Meaning
In this paper, I illustrate a way that work in cognitive psychology can fruitfully interact with truth-conditional semantics. A widely held view takes the meanings of gradable adjectives to be measure functions, which map objects to degrees on a scale. Scales come equipped with dimensions that fix what the degrees are. Following Bartsch and Vennemann, I observe that this allows dimensions to play the role of lexical roots, which provide the distinctive contents for each lexical entry. I review evidence that the grammar provides a limited range of scale structures, presumably dense linear orderings with a limited range of topological properties. I then turn to how the content of the root can be fixed. In the verbal domain, there is evidence suggesting roots are linked to concepts. For many adjectives, however, it is not concepts but approximate magnitude representation systems that fix root contents. These magnitude representation systems are approximate or analog, and do not provide precise values. I argue that the roots of such adjectives provide a weak, discrimination-based constraint on a grammatically fixed scale structure. Other adjectives can find concepts to fix their roots, which can support a well-known equivalence-class construction that fixes precise values on a scale. I conclude that though adjectives have a uniform truth-conditional semantics, they show substantial differences in the cognitive sources of their root meanings. There are thus (at least) two sub-classes of adjectives, with roots fixed by different mechanisms, with different degrees of precision, and with very different cognitive properties.
- 10:00 – 10:40: Shane Steinert-Threlkeld (Universiteit van Amsterdam): Learnability and Semantic Universals: a Recurrent Neural Network Approach
(joint work with Jakub Szymanik)
One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to its source. In this talk, we explore the hypothesis that many semantic universals associated with quantifiers exist because quantifiers satisfying the universal are easier to learn. While the idea that learnability explains universals is not new, we present a model of learning — back-propagation through a recurrent neural network — which can make good on this promise. We present a few experiments training such a network to learn to verify quantifiers and discuss the prospects for explaining monotonicity, logicality, and conservativity universals.
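The learning setup can be sketched in miniature (the 0/1 scene encoding and the helper names below are illustrative assumptions, not the authors' actual code): a visual scene is reduced to a binary sequence over the restrictor set, and each sequence is labelled with the quantifier's truth value, yielding supervised data on which a recurrent network could be trained.

```python
import random

def sample(seq_len, quantifier):
    """One hypothetical training item: a scene encoded as a 0/1 sequence
    (1 = object in both A and B, 0 = object in A only), labelled with
    the quantifier's truth value on that scene."""
    seq = [random.randint(0, 1) for _ in range(seq_len)]
    return seq, quantifier(sum(seq), len(seq))

# Cardinality conditions: k = |A ∩ B|, n = |A|
at_least_three = lambda k, n: k >= 3
most           = lambda k, n: k > n - k   # 'most A are B'

scene, label = sample(10, most)  # one labelled item for the network
```

A learner that generalizes faster on, say, monotone quantifiers than on non-monotone ones would then count as evidence for the learnability hypothesis.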
- 10:40 – 11:20: Fabian Schlotterbeck (Universität Tübingen): Processes involved in the verification of modified numerals
In recent work on quantifier processing, it is assumed that quantifying expressions are associated with canonical verification procedures which consist of various subcomponents and provide a window into the underlying compositional semantic representations. At the same time, it is acknowledged that the verification of a given expression is not limited to one single procedure but may rather rely on a set of potential procedures more or less suitable in different situations. The present talk considers subprocesses that are involved in the verification of modified numerals. On the basis of experimental data from sentence-picture verification, factors are discussed that affect which subprocesses are employed during verification. A special focus lies on differences between upward and downward entailing modified numerals, but it is also highlighted what general constraints on theories of quantifier processing derive from the experimental results.
- 11:20 – 11:40: Coffee Break
- 11:40 – 12:20: Stephanie Solt (Leibniz-Zentrum Allgemeine Sprachwissenschaft): Do quantifiers count?
- 12:20 – 13:00: Arnold Kochari (Universiteit van Amsterdam): Is the generalized magnitude representation system involved in processing vague adjectives and quantifiers?
When a speaker utters a phrase like “They had a small rabbit at home” or “There were many rabbits in the park”, they are communicating information about a size or a quantity that they perceptually observed and assessed. For example, for the former they need to have estimated the size of the rabbit and made a comparison with some other size. For the latter, they need to have counted/estimated the quantity of rabbits and, again, made a comparison with some other quantity. In this research project, I am interested in the interface between the cognitive system behind these processes of estimation and comparison of magnitudes, and usage of vague adjectives and vague quantifiers (the so-called adjectives of quantity) in language.
Much research in cognitive psychology has been devoted to how people process magnitudes such as size, length, duration, quantity, etc. Due to a shared set of properties and interference effects between them, it has been suggested that they are all underlyingly processed by the same system, a generalized magnitude representation system (GMS, which can be seen as a generalized version of Approximate Number System). I take what we know about GMS as a starting point and look into how it gets involved in processing of vague adjectives and quantifiers.
In this talk, I will first discuss parallels between the GMS (ANS) and the properties of vague adjectives. I will then present a set of experiments examining whether processing vague adjectives makes use of the GMS. I find that processing the physical size of an object can be disrupted by the meaning of a simultaneously presented vague adjective. Such interference suggests that the GMS is indeed involved.
- 13:00 – 14:00: Lunch Break
- 14:00 – 14:40: Napoleon Katsos (Cambridge University): How children learn the words ‘some’, ‘all’, and ‘most’
We can all imagine how children learn to count: children receive substantial training from caregivers, and they start by learning ‘one’, proceeding in order of increasing cardinality (“…two, three, four…”). But what about other words of quantity such as ‘all’, ‘some’, ‘most’, or ‘none’? No one teaches young children explicitly what these words mean or how they are used. So, what constrains the order in which children learn them?
In this presentation, I will share recent findings from a crosslinguistic investigation into the acquisition of quantity expressions (such as the English ‘all’, ‘none’, ‘some’, ‘some…not’ and ‘most’) in 31 languages, representing 11 language types, by testing 768 5-year-old children and 536 adults (Katsos et al., 2016). We found a cross-linguistically similar order of acquisition of quantifiers, which we attempt to explain in terms of four factors relating to their meaning and use. In addition, exploratory analyses reveal that language- and learner-specific factors, such as negative concord and gender, are significant predictors of variation.
Besides sharing the main findings, I will explore the cognitive and perceptual biases that possibly underlie the universality of the findings. The audience is very welcome to contribute ideas from their own areas of expertise.
- 14:40 – 15:20: Olivier Bott (Universität Tübingen): Empty-Set Effects in Quantifier Interpretation
(joint work with Fabian Schlotterbeck and Udo Klein)
In recent work we have proposed a cognitively grounded quantification theory (Bott, Klein & Schlotterbeck 2013; Bott, Schlotterbeck & Klein, submitted). A central distinction made in this theory concerns whether a quantifier has the empty set among its witness sets (is an ‘empty-set quantifier’) or not. In this talk I will motivate this distinction by reviewing some relevant findings from number cognition. I will then present data from two picture verification experiments providing evidence for extraordinary processing difficulty of empty-set quantifiers in empty-set situations, i.e. situations in which the scope of an empty-set quantifier consists of the empty set. The first experiment further shows that empty-set effects must be distinguished from effects due to a quantifier’s monotonicity. In the second experiment we show that the findings for simply quantified sentences generalize to iterations of empty-set vs. non-empty-set quantifiers in doubly quantified sentences. The talk concludes by discussing as-yet-untested predictions for other interpretations of multiply quantified sentences, such as cumulative readings.
- 15:20 – 16:00: Hadas Kotek (New York University): Quirks of superlative 'most'
(Joint work with Yasutada Sudo and Martin Hackl)
Some recent theoretical and experimental work has argued that the determiner ‘most’ is a complex expression composed of the superlative morpheme ‘-est’ and an element such as ‘much/many’ or ‘more’ (e.g. Hackl 2009; Gajewski 2010; Solt 2011; Kotek et al. 2011, 2012, 2015; Pancheva & Tomaszewicz 2012; Szabolcsi 2012; Krasikova 2012; Coppock & Josefson 2014). In this talk I present some experimental evidence to this effect, along with an analysis of ‘most’ as a superlative construction: ‘most of the dots are blue’ is true just in case there are more blue dots than yellow dots, more blue dots than red dots, more blue dots than green dots, and so on. However, I show that ‘most’ exhibits peculiar behavior unlike other superlatives: it is sensitive to context in ways that go beyond what is predicted from its semantics. The distribution of non-blue colors, as well as the precise visual scene associated with the blue dots, affects speakers’ verification strategies and judgments. I discuss implications of these findings for the theory of ‘most’.
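The two truth conditions in play, the superlative reading described above and the more familiar proportional reading, can be contrasted in a small sketch (the scene and helper functions are invented for illustration):

```python
from collections import Counter

def most_superlative(dots, target):
    """Superlative reading: more target dots than dots of any
    other single colour."""
    counts = Counter(dots)
    return all(counts[target] > n for c, n in counts.items() if c != target)

def most_proportional(dots, target):
    """Proportional reading: target dots outnumber all non-target
    dots combined."""
    counts = Counter(dots)
    return counts[target] > len(dots) - counts[target]

scene = ["blue"] * 4 + ["red"] * 3 + ["green"] * 3
most_superlative(scene, "blue")   # True: 4 > 3 and 4 > 3
most_proportional(scene, "blue")  # False: 4 is not more than 6
```

Scenes like this one, where the two readings diverge, are exactly where verification behavior can tease the analyses apart.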
- 16:00 – 16:20: Coffee Break
- 16:20 – 17:00: Raffaella Bernardi and Sandro Pezzelle (University of Trento): Quantifiers and proportions in language and vision: insights from behavioral and computational studies
In this talk, we present a number of behavioral and computational experiments focused on grounding quantifiers (‘few’, ‘most’, ‘all’) in vision. On the computational side (where little attention has traditionally been paid to so-called function words), we show that state-of-the-art computer vision attention mechanisms coupled with basic insights from formal semantics give the best performance on the task of quantifying over visual scenes. Moreover, we explore the interplay between quantifiers and a range of quantity expressions, such as cardinals and proportions. On the behavioral side, we present the results of two experiments aimed at exploring both the abstract and the grounded representation of quantifiers. We show that, in their abstract representation, quantifiers are ordered on a non-linear scale, whereas they are best described by the proportion of target objects when referring to entities grounded in visual contexts.
- 17:00 – 17:40: Camilo Thorne (University of Stuttgart): Semantic Complexity and Corpus Analysis
Studies in cognitive science and linguistics (e.g., picture verification tasks) indicate that some natural language expressions, for instance proportional quantifiers as opposed to Aristotelian quantifiers, take speakers longer to understand and process. Semantic complexity explains this phenomenon in terms of the computational cost associated with the (formal) semantics of linguistic expressions: the higher the cost, the longer they take. It is, however, still unclear whether semantic complexity has similar predictive power with respect to language production. In this talk we outline a preliminary answer to this question based on corpus analysis, viz., collecting and analyzing corpus statistics which, to some extent, indicate that semantic complexity influences the distribution of a number of English constructs. We outline several methodologies resorting to different degrees of semantic analysis, ranging from simple pattern matching to deep semantic parsing.
- 20:00: Conference Dinner (location TBA)
Friday, September 29
Location: the Doelenzaal of the University Library, Singel 425
- 9:20 – 10:00: Yosef Grodzinsky (Hebrew University of Jerusalem): Quantifier Polarity, Comparatives, and Verification Strategies
The semantic literature is replete with implicit DE operators. In this talk, I will present a new complexity metric, DE-ness processing Cost (DEC), motivated mostly by considerations that arise when the processing of comparative quantifiers is considered. I will derive some of its predictions (by coupling DEC with assumptions about the structure of comparative constructions), and present some new experimental results that bear on this matter. Finally, I will talk about verification. Barwise & Cooper famously proposed a sampling-based, “witness set” verification strategy. I will evaluate their proposal and related ones, in an attempt to explain results that remain unaccounted for by this approach.
- 10:00 – 10:40: Stefan Heim (RWTH Aachen): If so few are “many”, how many are “few”?
The aim of the present project was to investigate whether the meaning of the quantifiers “many” and “few” can be altered by changing the threshold value of their associated degrees and, most importantly, whether this change in meaning carries over to the respective polar opposite. Two behavioural experiments with healthy young adults provided evidence that it does. Next, we showed that the source of this effect is located in Broca’s region in the left inferior frontal cortex, a region known to be relevant for semantic operations in other paradigms. This finding was further corroborated in two experiments with patients suffering from atrophy in the frontal cortex, who failed to show any carry-over effects in the experiment.
- 10:40 – 11:20: Giosuè Baggio (Norwegian University of Science and Technology): Quantifiers, models, and the role of parietal cortex in a neural theory of interpretation
Several imaging studies have reported activations of inferior parietal cortex, among other brain regions, in response to quantified phrases and sentences. An attractive explanation of these results is that at least some types of natural language quantifiers directly engage the brain’s number system. Using fMRI evidence, I will suggest that a more general connection can be established between the abstract magnitude system in parietal cortex, representing space, time and number, and core processes underlying the construction of models during reasoning and discourse processing. This line of thinking is a first step toward a neurocognitive theory of the interpretation of referring expressions in natural language and beyond.
- 11:20 – 11:40: Coffee Break
- 11:40 – 12:20: Nina Gierasimczuk (Technical University of Denmark)
- 12:20 – 13:00: Dariusz Kalociński (University of Warsaw): Adaptation of meaning to environmental constraints: road to ‘most’
One of the driving forces of language evolution is the selection of variants which suit the communicative needs of its users. Crucially, the fitness of linguistic variants may largely depend on the structure of the environment in which language is learned, transmitted and used. This hypothesis has gained some support in various domains, including kinship terms, spatial descriptions and color categories, to name just a few. However, little is known about quantifiers from this perspective. In my talk, I will argue that the meaning of ‘most’ may be viewed as an adaptation of language to general communicative principles and to distributional properties of the environment, such as normality.
- 13:00 – 14:00: Lunch Break
- 14:00 – 14:40: Jakub Dotlacil (Universiteit van Amsterdam): Proportional and non-proportional quantifiers in ACT-R
In this ongoing work, I will discuss how a cognitive architecture, ACT-R, makes differing predictions for non-proportional (more than three, less than ten…) and proportional quantifiers (more than half, most…). In particular, it can be shown that the verification of the former group requires only procedural knowledge and planning functions, while the verification of the latter group is impossible without the interaction of declarative and procedural knowledge. This distinction between the cognitive sub-components involved will be used to model the different profiles of cognitive load during the interpretation of the two quantifier types.
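This is not the ACT-R model itself, but the asymmetry between the two quantifier types can be illustrated procedurally (all names below are invented): a non-proportional quantifier needs only one running count and can stop as soon as its threshold is crossed, while a proportional quantifier must hold two quantities in memory and compare them at the end.

```python
def more_than_n(items, is_target, n):
    """Non-proportional 'more than n': a single running count suffices,
    and verification can stop early once the threshold is crossed."""
    count = 0
    for x in items:
        if is_target(x):
            count += 1
            if count > n:  # early exit; the rest of the scene is irrelevant
                return True
    return False

def more_than_half(items, is_target):
    """Proportional 'more than half': both the target count and the total
    must be retained and compared at the end."""
    target = total = 0
    for x in items:
        total += 1
        if is_target(x):
            target += 1
    return target > total - target
```

The extra quantity that `more_than_half` must carry to the final comparison is, roughly, where declarative memory would be implicated in an architecture like ACT-R.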
- 14:40 – 15:20: Maria Spychalska (University of Cologne): Pragmatic effects on the processing of quantifiers
In my talk I will focus on pragmatic aspects of the processing of quantifiers. Sentences with “some” allow two readings: the weak, logical interpretation (“there are some”) and the strong, pragmatic interpretation (“some but not all”). The strong reading is considered to result from a pragmatic strengthening mechanism described as scalar implicature. Similar readings have been observed for sentences with bare numerals, e.g. “two” can be interpreted as “at least two” (the weak reading), or as “exactly two” (the strong reading). In a series of ERP experiments I compare how the weak and strong readings of “some” and numerals are processed, and discuss how the quantifier interpretation on the one hand and the context model on the other determine the hearer’s predictions in the incremental process of constructing the sentence’s truth-conditional interpretation. Next, I present a new line of research in which I aim to study the role of perspective taking in the processing of quantifiers and the implicature.
- 15:20 – 16:00: Barbara Tomaszewicz (University of Cologne): Proportional and Superlative ‘Most’ in Visual Verification Tasks
I will discuss how a visual verification task can be used to reveal subtle details in the semantics of quantifiers. The verification of the truth of a sentence against a display depends both on the properties of the visual scene and on the complexity of the sentence. A simpler scene requires less visual inspection and thus typically leads to a simpler verification procedure. I will present a case where ‘Proportional Most’ prevents participants from utilizing a simple verification procedure, while ‘Superlative Most’ encourages it. This suggests that the motivation for the subconscious switch in procedures is not to maximize efficiency, but to obtain the information from the visual scene as instructed by the semantics of the quantifier (Tomaszewicz 2013, Lidz et al. 2011). I will also show a comparison between ‘Superlative Most’ and ‘only’ in a visual verification task, revealing that in a language like Polish they are exactly parallel in their dependence on focus for interpretation. The results of visual verification experiments add a kind of detail to semantic analyses that cannot be obtained using solely theoretical diagnostics.
- 16:00 – 16:30: Closing Remarks and Discussion