Hybrid models of cognition: The influence of modal and amodal cues in language processing tasks


URI: http://hdl.handle.net/10900/125620
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1256204
http://dx.doi.org/10.15496/publikation-66983
Document type: PhD thesis
Date: 2022-03-28
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Psychologie
Advisor: Kaup, Barbara (Prof. Dr.)
Day of Oral Examination: 2020-10-30
DDC Classification: 150 - Psychology
Keywords: Cognitive psychology, Cognition, Language comprehension, Language processing <psycholinguistics>
Other Keywords: Simulation approach to language comprehension
Grounded Cognition
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en

Abstract:

In recent years, a growing body of evidence has accumulated suggesting that at least some part of our cognition, and especially language comprehension, is embodied in actions, perceptions, and emotions, and is therefore multimodal in nature. While the debate in previous decades focused on whether cognition is embodied at all, today the discussion revolves more around when and how embodied representations are used and what their exact role is. The present dissertation aims to shed light on this discussion by investigating the presence and the role of multimodal representations across different tasks and contexts.

First, a series of anagram-solving tasks examined the influence of different modal cues on the subsequent solving of anagrams of words associated with either the ocean (e.g., shark -> SARHK) or the sky (e.g., cloud -> CUOLD). Combining a background picture depicting an ocean-sky scene with a shift of attention towards the upper half of the computer screen resulted in faster solution times for sky-related words than for ocean-related words, while the reverse was true for a downward attentional shift. This finding was extended to emotional valence, using pictures of either positive or negative emotional valence to prime words with a matching valence. Indeed, anagrams were solved faster when the emotional valence of the picture matched the associated emotional valence of the solution word. Returning to the domain of vertical space, we tried to replicate the findings of the first set of experiments with another set of stimuli and with linguistic cues, in the form of adjectives or sentences preceding the anagrams, paired with a vertical shift of attention. In contrast to pictorial cues, these linguistic cues did not influence solution times.

In another set of anagram-solving experiments, we directly compared the influence of linguistic (amodal) and pictorial (modal) colour cues, using written colour words or coloured rectangles as primes for solution words associated with a certain colour (e.g., specific types of fruit or vegetable, such as “cherry”). These anagrams were solved faster when a matching colour cue was presented beforehand, regardless of whether the cue was linguistic or pictorial. Combining both cues, by showing a written colour word inside a coloured rectangle, facilitated anagram solving only when both cues matched the solution word, e.g. the word “green” written inside a green rectangle facilitating the solution of an anagram for “cucumber”. Neither the symbolic, amodal colour word nor the modal colour patch appears to be exclusively responsible for this match effect; instead, both cues seem to activate the same superimposed conceptual colour representation.

In a final line of research, we investigated the extent to which hemispheric differences come into play in embodied word representations. We replicated a divided visual field study by Zwaan and Yaxley (2003a), who had found that a match effect regarding visual-spatial relations between objects was confined to the right hemisphere in a semantic-relatedness judgment task, adding response side as a factor. Word pairs were shown very briefly to either the left or the right visual field, arranged vertically on the screen so that they either matched or mismatched the canonical spatial relation between the words’ referents (“nose” is above “mustache” in a canonical view of a face, so seeing “nose” written above “mustache” would be a match). In contrast to the original study, there was no interaction between visual field and the spatial compatibility effect. Instead, we found an interaction between response side and visual field, together with a main effect of match that was independent of visual field. This leads us to assume that multimodal concepts are not confined to either hemisphere but are instead spread over large-scale networks across the whole brain.

Taken together, these results suggest that a hybrid view of cognition is the most fruitful: superimposed conceptual representations seem to be at the core of semantic meaning and can be influenced by both modal and amodal contextual information, with neither type of information exerting clear dominance over the other.
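
To make the match/mismatch logic of the colour-cue anagram experiments concrete, the following is a minimal sketch of a single trial loop in Python. It is not the dissertation's actual experimental code: the stimulus list, function names, and console-based presentation are assumptions for illustration only (a real experiment would use dedicated presentation software with millisecond-accurate timing and a coloured patch rather than a printed cue).

import random
import time

# Hypothetical stimulus set pairing solution words with associated colours;
# the "cherry"/"cucumber" examples follow the abstract above.
STIMULI = {"cherry": "red", "cucumber": "green", "banana": "yellow"}

def scramble(word: str) -> str:
    """Return a scrambled version (anagram) of the solution word."""
    letters = list(word)
    random.shuffle(letters)
    return "".join(letters)

def run_trial(word: str, cue_colour: str) -> dict:
    """Present a colour cue, then the anagram, and time the solution.

    The cue either matches or mismatches the colour associated with the
    solution word; the prediction is faster solutions on match trials.
    """
    print(f"Cue: {cue_colour}")  # stands in for a colour word or coloured patch
    start = time.monotonic()
    answer = input(f"Solve: {scramble(word).upper()} -> ").strip().lower()
    return {
        "word": word,
        "match": cue_colour == STIMULI[word],
        "correct": answer == word,
        "rt": time.monotonic() - start,
    }

if __name__ == "__main__":
    results = []
    for word in STIMULI:
        # Randomly assign a matching or a mismatching colour cue per trial.
        if random.random() < 0.5:
            cue = STIMULI[word]
        else:
            cue = random.choice([c for c in STIMULI.values() if c != STIMULI[word]])
        results.append(run_trial(word, cue))
    # Compare mean solution times for correct match vs. mismatch trials.
    for cond, label in ((True, "match"), (False, "mismatch")):
        rts = [r["rt"] for r in results if r["match"] == cond and r["correct"]]
        if rts:
            print(f"{label}: mean RT {sum(rts) / len(rts):.2f} s over {len(rts)} trial(s)")

The same match/mismatch contrast carries over, under the same caveats, to the picture, valence, and vertical-attention cues described earlier; only the prime presentation changes.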
